Unable to upload my application to Meta Quest (App Lab) - facebook

I developed an app and wanted to upload it to the Meta Quest (App Lab) platform, but every method I have tried has failed.
Here's what I've tried and how each attempt failed:
The upload button in the app is grayed out, so the build cannot be uploaded through MQDH.
Using the drag-and-drop upload feature of the latest version of MQDH, it reports that the CLI tool does not exist or has no permission.
Using the ovr-platform-util command-line upload method, it reports "An unexpected error occurred".
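For reference, the command-line attempt looked roughly like this (a sketch with the app ID, app secret, and APK path replaced by placeholders; treat the exact flag spellings as an approximation and check the ovr-platform-util documentation):
ovr-platform-util upload-quest-build --app-id <APP_ID> --app-secret <APP_SECRET> --apk <path/to/build.apk>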
Here is the log content:
Server log: {
"app_id": 8618231871550973,
"client": "COMMAND_LINE",
"log_level": "ERROR",
"event_name": "UNEXPECTED_ERROR",
"stack_trace": "RangeError [ERR_OUT_OF_RANGE]: The value of \"offset\" is out of range. It must be >= 0 and <= 1. Received 2\n at new NodeError (node:internal/errors:371:5)\n at boundsError (node:internal/buffer:86:9)\n at Buffer.readUInt16LE (node:internal/buffer:243:5)\n at /snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js\n at Object.set extra [as extra] (/snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js)\n at p (/snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js)\n at Object.get entries [as entries] (/snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js)\n at Object.getEntries (/snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js)\n at /snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js\n at /snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js",
"extra": "{\"caught\":true,\"os\":\"{\\\"platform\\\":\\\"darwin\\\",\\\"arch\\\":\\\"x64\\\",\\\"type\\\":\\\"Darwin\\\"}\",\"cli_version\":\"1.81.0.000001\",\"compatibility_version\":2,\"session_id\":\"8618231871550973_2022-12-13T07:26:42.217Z\",\"command\":\"upload-quest-build\",\"app_id\":8618231871550973,\"platform\":\"ANDROID_6DOF\"}",
"platform": "ANDROID_6DOF",
"cli_version": "1.81.0.000001",
"session_id": "8618231871550973_2022-12-13T07:26:42.217Z",
"binary_id": ""
}
Server log: {
"app_id": 8618231871550973,
"client": "COMMAND_LINE",
"log_level": "DEBUG",
"event_name": "COMMAND_FAILED",
"stack_trace": "at Object._log (/snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js)\nat Object.debug (/snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js)\nat /snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js\nat processTicksAndRejections (node:internal/process/task_queues:96:5)",
"extra": "{\"code\":\"ERR_OUT_OF_RANGE\",\"os\":\"{\\\"platform\\\":\\\"darwin\\\",\\\"arch\\\":\\\"x64\\\",\\\"type\\\":\\\"Darwin\\\"}\",\"cli_version\":\"1.81.0.000001\",\"compatibility_version\":2,\"session_id\":\"8618231871550973_2022-12-13T07:26:42.217Z\",\"command\":\"upload-quest-build\",\"app_id\":8618231871550973,\"platform\":\"ANDROID_6DOF\"}",
"platform": "ANDROID_6DOF",
"cli_version": "1.81.0.000001",
"session_id": "8618231871550973_2022-12-13T07:26:42.217Z",
"binary_id": ""
}

Related

Swift Autofill Password not working: "failed to approve"

I want to implement Password AutoFill but I cannot get it working. It fails with the following error:
Error Domain=NSOSStatusErrorDomain Code=-25293 ""beezleapp.com" failed to approve "N4EV3W64CV.com.beezleapp.beezle"" UserInfo={numberOfErrorsDeep=0, NSDescription="beezleapp.com" failed to approve "N4EV3W64CV.com.beezleapp.beezle"}
For this code:
SecAddSharedWebCredential(
"beezleapp.com" as CFString,
emailPw.email as CFString,
pw
) { error in
    // the "failed to approve" error is delivered asynchronously in this completion handler
    print(error as Any)
}
These are my associated domains:
Bundle identifier:
Team ID:
This is my AASA file:
{
"applinks": {
"details": [
{
"appID": "N4EV3W64CV.com.beezleapp.beezle",
"paths": ["*"]
}
]
},
"webcredentials":{
"apps":[
"N4EV3W64CV.com.beezleapp.beezle"
]
},
"appclips":{
}
}
It is available at:
$ curl https://beezleapp.com/.well-known/apple-app-site-association
$ curl https://beezleapp.com/apple-app-site-association
What am I missing?
It turned out to be a DNS caching issue on the device side. I turned on Associated Domains Development in Settings -> Developer, and then it worked instantly.
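As an additional sanity check, you can inspect what Apple's CDN is currently serving for the domain, independent of the device cache (this uses Apple's documented AASA CDN endpoint):
$ curl https://app-site-association.cdn-apple.com/a/v1/beezleapp.com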

MongoDB Replica Set - The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid

This concerns the Azure deployment template for a MongoDB replica set defined here: mongodb-replica-set-centos.
When I run the recommended deployment commands to deploy the replica set, namely
az group create --name <resource-group-name> --location <resource-group-location> # Use this command when you need to create a new resource group for your deployment.
az deployment group create --resource-group <my-resource-group> --template-uri https://raw.githubusercontent.com/migr8/AzureDeploymentTemplates/main/mongo/mongodb-replica-set-centos/azuredeploy.json
(where the resource group is already set up), I receive the following error:
{
"status": "Failed",
"error": {
"code": "DeploymentFailed",
"message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.",
"details": [
{
"code": "Conflict",
"message": "{\r\n \"status\": \"Failed\",\r\n \"error\": {\r\n \"code\": \"ResourceDeploymentFailure\",\r\n \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n \"details\": [\r\n {\r\n \"code\": \"DeploymentFailed\",\r\n \"message\": \"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.\",\r\n \"details\": [\r\n {\r\n \"code\": \"BadRequest\",\r\n \"message\": \"{\\r\\n \\\"error\\\": {\\r\\n \\\"code\\\": \\\"InvalidParameter\\\",\\r\\n \\\"message\\\": \\\"The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid.\\\",\\r\\n \\\"target\\\": \\\"linuxConfiguration.ssh.publicKeys.keyData\\\"\\r\\n }\\r\\n}\"\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n}"
},
{
"code": "Conflict",
"message": "{\r\n \"status\": \"Failed\",\r\n \"error\": {\r\n \"code\": \"ResourceDeploymentFailure\",\r\n \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n \"details\": [\r\n {\r\n \"code\": \"DeploymentFailed\",\r\n \"message\": \"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.\",\r\n \"details\": [\r\n {\r\n \"code\": \"BadRequest\",\r\n \"message\": \"{\\r\\n \\\"error\\\": {\\r\\n \\\"code\\\": \\\"InvalidParameter\\\",\\r\\n \\\"message\\\": \\\"The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid.\\\",\\r\\n \\\"target\\\": \\\"linuxConfiguration.ssh.publicKeys.keyData\\\"\\r\\n }\\r\\n}\"\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n}"
}
]
}
}
The problem field, which is in both primary-resources.json and secondary-resources.json, appears to be:
"variables": {
"subnetRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('subnet').vnet, parameters('subnet').name)]",
"securityGroupName": "[concat(parameters('namespace'), parameters('vmbasename'), 'nsg')]",
"linuxConfiguration": {
"disablePasswordAuthentication": true,
"ssh": {
"publicKeys": [
{
"path": "[concat('/home/', parameters('adminUsername'), '/.ssh/authorized_keys')]",
"keyData": "[parameters('adminPasswordOrKey')]"
}
]
}
}
},
and is associated with the adminPasswordOrKey parameter. I have tried changing this to both standard passwords and SSH keys of varying bit depth, with no luck...
How can I fix this?
Repro steps
Run az group create --name <resource-group-name> --location <resource-group-location> where the resource group exists.
Run az deployment group create --resource-group <my-resource-group> --template-uri https://raw.githubusercontent.com/migr8/AzureDeploymentTemplates/main/mongo/mongodb-replica-set-centos/azuredeploy.json and step through the prompts.
Enter the relevant information.
Further Investigation
I have just seen this answer (https://stackoverflow.com/a/60860498/626442) saying specifically that
Note: Please note that the only allowed path is /home/<username>/.ssh/authorized_keys due to a limitation of Azure.
I have changed the value of the path; no joy, same error. :'[
You forgot to pass parameters in az deployment group create .... --parameters azuredeploy.parameters.json. You can download azuredeploy.parameters.json and change values as needed. See https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/template-tutorial-use-parameter-file?tabs=azure-cli#deploy-template for details.
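For example, the deployment command from the question with the parameter file attached would look like this:
az deployment group create --resource-group <my-resource-group> --template-uri https://raw.githubusercontent.com/migr8/AzureDeploymentTemplates/main/mongo/mongodb-replica-set-centos/azuredeploy.json --parameters azuredeploy.parameters.json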
Specifically, the error in the question complains about the adminUsername parameter being empty. Bear in mind this user name is also used in the home directory path, so limit yourself to lowercase ASCII a-z, numbers, and underscores. No spaces, no special characters, no UTF.
Not related to the error, but be aware these necromancers use MongoDB 3.2, which was buried 4 years ago: https://www.mongodb.com/support-policy/lifecycles. Considering they open it wide to the internet, you may have far more problems if you actually deploy it.
UPDATE
An example of the parameters I used:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"adminUsername": {
"value": "yellow"
},
"mongoAdminUsername": {
"value": "phrase"
},
"mongoAdminPassword": {
"value": "settle#SING"
},
"secondaryNodeCount": {
"value": 2
},
"sizeOfDataDiskInGB": {
"value": 2
},
"dnsNamePrefix": {
"value": "written"
},
"centOsVersion": {
"value": "7.7"
},
"primaryNodeVmSize": {
"value": "Standard_D1_v2"
},
"secondaryNodeVmSize": {
"value": "Standard_D1_v2"
},
"zabbixServerIPAddress": {
"value": "Null"
},
"adminPasswordOrKey": {
"value": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDdNRTU0XF3xazhhDmwXXWGG7wp4AaQC1r89K7sZFRXp9VSUtydV59DHr67mV5/0DWI5Co1yWK713QJ00BPlBIHNMNLuoBBq8IkOx8fBZF1g9YFm5Zy4ay+CF4WgDITAsyxhKvUWL6jwG5M3XIdVYm49K+EFOCWSSaNtCk8tHhi3v6/5HFkwc2r0UL/WWWbbt5AmpJ8QOCDk/x+XcgCjP9vE5jYYGsFz9F6V1FdOpjVfDwi13Ibivj/w2wOZh2lQGskC+qDjd2upK13+RfWYHY3rr+ulNRPckHRhOqmZ2vlUapO4T0X9mM6ugSh1FprLP5nHdVCUls2yw4BAcSoM9NMiyafE56Xkp9h3bTAfx5Ufpe5mjwQp+j15np1pVpwDaEgk7ZeaPoZPhbalpvZGyg9KiKfs9+KUYHfGklIOHKJ3RUoPE286rg1U4LGswil5RARRSf86kBBHXaIPxy1X0N6QryeWhk0aM6LWEdl7mVbQksa7ilANnsaVMl7FSdY/Cc="
}
}
}
DANGER: It will deploy a publicly accessible MongoDB replica set with publicly accessible credentials, so please delete the resources as soon as you are happy with testing/debugging.
This is how the deployment looks on the portal:

LDAP passport strategy for Hyperledger Composer

I have been trying to use the LDAP passport strategy for authentication in the Hyperledger Composer REST server. I am using the configuration below for the LDAP passport strategy:
export COMPOSER_PROVIDERS='{
"ldap": {
"provider":"ldap",
"authScheme":"ldap",
"module":"passport-ldapauth",
"authPath":"/auth/ldap",
"successRedirect":"/",
"failureRedirect":"/",
"server":"{
"url":"ldap://localhost:389",
"bindOn":"cn=admin,dc=example, dc=com",
"bindCredentials":"*****",
"searchBase":"ou=admin,dc=example,dc=com",
}"
}
}'
While starting composer-rest-server with authentication, it shows this error:
SyntaxError: Unexpected token in JSON at position 210
at JSON.parse (<anonymous>)
at Promise.then (/home/mfgteg/.nvm/versions/node/v8.9.3/lib/node_modules/composer-rest-server/server/server.js:127:34)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
I got the correct format, thanks to a link I found on the IBM site.
I used the following configuration:
export COMPOSER_PROVIDERS='{
"ldap": {
"provider": "ldap",
"authScheme": "ldap",
"module": "passport-ldapauth",
"authPath": "/auth/ldap",
"successRedirect": "/",
"failureRedirect": "/",
"server": {
"url": "ldap://localhost:389",
"bindDn": "cn=admin,dc=example, dc=com",
"bindCredentials": "*****",
"searchBase": "ou=admin,dc=example,dc=com"
}
}
}'
However, I have yet to figure out what to put in "callbackURL".
Use the variable provided below. Just change the successRedirect and credentials as per your configuration. Also, if you are running the client application on some other machine, you may need to change localhost in the url to your machine's IP address.
Note: I have tested this with OpenLDAP configured.
COMPOSER_PROVIDERS='{
"ldap": {
"provider": "ldap",
"authScheme": "ldap",
"module": "passport-ldapauth",
"authPath": "/auth/ldap",
"successRedirect": "Where you want to redirect",
"failureRedirect": "/ldap",
"session": true,
"json": true,
"LdapAttributeForLogin": "cn",
"LdapAttributeForUsername": "cn",
"server": {
"url": "ldap://localhost:389",
"bindDN": "cn=admin,dc=hsc,dc=com",
"bindCredentials": "xxxxx",
"searchBase": "ou=users,dc=hsc,dc=com",
"searchFilter": "(cn={{username}})"
}
}
}'
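With COMPOSER_PROVIDERS exported, the REST server is then started with authentication enabled, roughly like this (admin@my-network is a placeholder business network card; see composer-rest-server --help for the exact options):
composer-rest-server -c admin@my-network -a true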

Standalone Service Fabric - AWS - FileStoreService - Copy-ServiceFabricApplicationPackage Fails

I have a 3-node standalone Windows Service Fabric cluster set up in AWS. The TestConfiguration and CreateCluster scripts run successfully; however, on attempting to deploy any application into the cluster I get the following error from PowerShell.
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath .\pkg\<packagename> -ImageStoreConnectionString fabric:ImageStore
Copy-ServiceFabricApplicationPackage : An error occurred during this operation. Please check the trace logs for more
details.
At line:1 char:1
+ Copy-ServiceFabricApplicationPackage -ApplicationPackagePath .\pkg\ ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Copy-ServiceFabricApplicationPackage], FabricException
+ FullyQualifiedErrorId : CopyApplicationPackageErrorId,Microsoft.ServiceFabric.Powershell.CopyApplicationPackage
I'm not sure which trace logs would be useful in diagnosing the error; however, checking the Windows event log on one of the nodes, I see the following errors, all for the FileStoreService.
ImpersonateAndCopyFile for SourcePath:\\<ipaddress>\StoreShare_Node3\131601795137630192\6.0.232.9494_0\131601794828730764_8589934592_1.ClusterManifest.xml, DestinationPath:C:\ProgramData\SF\Node1\Fabric\work\Applications\__FabricSystem_App4294967295\work\Store\131601795317314061\6.0.232.9494_0\131601794828730764_8589934592_1.ClusterManifest.xml failed: 0x8007052e. Have tried all access tokens.
CopyFile: SourcePath:\\<ip address>\StoreShare_Node3\131601795137630192\6.0.232.9494_0\131601794828730764_8589934592_1.ClusterManifest.xml, DestinationPath:C:\ProgramData\SF\Node1\Fabric\work\Applications\__FabricSystem_App4294967295\work\Store\131601795317314061\6.0.232.9494_0\131601794828730764_8589934592_1.ClusterManifest.xml, Error:0x8007052e, ElapsedTime:80
CopyFile: no new token is found. current token count: 2
Any ideas what this could be? I have recreated the cluster with no security, and the firewall has all ports open both in AWS and on the node machines (trying to remove anything that could be blocking the copy). Within AWS I am using Simple AD, so all nodes run as the same AD administrator and can communicate to create the cluster.
Below is the cluster config I'm using; I kept it as simple as I could to limit the possible causes of the problem.
Any help with diagnosing the copy-file issue, or even pointing me at the relevant trace logs, would be great.
Additionally, I notice the ImageStoreService is showing warnings within Service Fabric Explorer:
Unhealthy event: SourceId='System.FM', Property='State', HealthState='Warning', ConsiderWarningAsError=false.
Partition reconfiguration is taking longer than expected.
ImageStoreService 3 3 00000000-0000-0000-0000-000000003000
P/P Ready Node3 131601795137630192
S/S InBuild Node1 131601795317314061
S/S InBuild Node2 131601795317314062
(Showing 3 out of 3 replicas. Total available replicas: 1)
EDIT
Additional Information
On investigating the problem further, I ran Copy-ServiceFabricApplicationPackage with the -Debug flag, and it now gives the error below, suggesting that the user name or password being used, either to upload the package from my computer into the cluster or for the cluster to distribute it node to node, is incorrect. I presume for node-to-node distribution it uses the local accounts it creates (ending in fffff), and I don't know why those credentials would be invalid. If it is between the computer uploading the package and the cluster, then I'm currently running with no security turned on, so I don't know why this would be an issue either. Any help much appreciated.
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath ..\pkg\Release -ImageStoreConnectionString fabric:imagestore -Debug
VERBOSE: System.Fabric.FabricException: An error occurred during this operation. Please check the trace logs for more details. ---> System.Runtime.InteropServices.COMException: The user name or password is incorrect. (Exception from HRESULT: 0x8007052E)
Thanks
{
"name": "SampleCluster",
"clusterConfigurationVersion": "1.0.0",
"apiVersion": "08-2017",
"nodes": [
{
"nodeName": "Node1",
"iPAddress": "<node 1 internal ip address>",
"nodeTypeRef": "StandardNodeType",
"faultDomain": "fd:/0",
"upgradeDomain": "UD0"
},
{
"nodeName": "Node2",
"iPAddress": "<node 2 internal ip address>",
"nodeTypeRef": "StandardNodeType",
"faultDomain": "fd:/1",
"upgradeDomain": "UD1"
},
{
"nodeName": "Node3",
"iPAddress": "<node 3 internal ip address>",
"nodeTypeRef": "StandardNodeType",
"faultDomain": "fd:/2",
"upgradeDomain": "UD2"
}
],
"properties": {
"diagnosticsStore": {
"metadata": "Please replace the diagnostics store with an actual file share accessible from all cluster machines.",
"dataDeletionAgeInDays": "7",
"storeType": "FileShare",
"IsEncrypted": "false",
"connectionstring": "c:\\ProgramData\\SF\\DiagnosticsStore"
},
"nodeTypes": [
{
"name": "StandardNodeType",
"clientConnectionEndpointPort": "19000",
"clusterConnectionEndpointPort": "19001",
"leaseDriverEndpointPort": "19002",
"serviceConnectionEndpointPort": "19003",
"httpGatewayEndpointPort": "19080",
"reverseProxyEndpointPort": "19081",
"applicationPorts": {
"startPort": "20000",
"endPort": "30000"
},
"ephemeralPorts": {
"startPort": "49152",
"endPort": "65534"
},
"isPrimary": true
}
],
"fabricSettings": [
{
"name": "Setup",
"parameters": [
{
"name": "FabricDataRoot",
"value": "C:\\ProgramData\\SF"
},
{
"name": "FabricLogRoot",
"value": "C:\\ProgramData\\SF\\Log"
}
]
}
],
"addOnFeatures": [
"DnsService",
"RepairManager"
]
}
}
After more investigation, I discovered it was due to not correctly enabling file sharing on the Windows boxes. Although it was shown as enabled within the properties of the network adaptor, I failed to realise the settings also needed to be enabled under the Advanced sharing settings (Control Panel\Network and Internet\Network and Sharing Center\Advanced sharing settings).
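For a scriptable approximation of flipping that switch on each node, enabling the "File and Printer Sharing" firewall rule group achieves roughly the same thing (a sketch; adjust for your network profiles and policy):
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes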

Asterisk REST ARI snoop (cURL)

I am trying:
curl -v -u j123:j321 -X POST "http://localhost:8088/ari/channels/1421226074.4874/snoop?spy=SIP/695"
In response, I receive:
"message": "Invalid direction specified for spy"
For the spy parameter I have also tried:
SIP/695; SIP:695, SIP#695, localhost#695, channel, channelName
None of them work.
A call comes into the queue from sip-416 to queue_1 and is distributed to 694. I need to connect 695 to wiretap channel 1421226074.4874.
I only need to listen, not whisper.
Help me please)
The error message is telling you what the problem is:
"message": "Invalid direction specified for spy"
The spy parameter is a direction for spying, not the channel to spy on (see reference documentation here). You've already specified the channel to snoop on in the URI path - you need to specify the direction of the media in the spy parameter.
As an aside, apparently the auto-generated wiki isn't displaying enum values, which is unfortunate. We'll have to fix that.
For reference, here's the parameter in the Swagger JSON:
"name": "spy",
"description": "Direction of audio to spy on",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string",
"defaultValue": "none",
"allowableValues": {
"valueType": "LIST",
"values": [
"none",
"both",
"out",
"in"
]
}
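Putting that together, a listen-only snoop would look roughly like this (a sketch: "both" is one of the allowable values above and hears both directions of audio, and "myapp" is a placeholder for your Stasis/ARI application name, which the snoop operation also expects):
curl -v -u j123:j321 -X POST "http://localhost:8088/ari/channels/1421226074.4874/snoop?spy=both&app=myapp"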