AWS Device Farm File upload 5MB Limit

Calling sendKeys to upload a remote file to AWS Device Farm desktop,
I'm getting this error:
1 validation error detected: Value at 'payload' failed to satisfy constraint: Member must have length less than or equal to 5000000

It seems like AWS Device Farm has a 5MB file limitation.
I could not find any documentation to support that.
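A workaround until the limit is documented: check the payload size client-side before calling sendKeys. Below is a minimal sketch (hedged: written in PHP for consistency with the other snippets in this document, the 5,000,000-byte ceiling is taken from the validation error above, and $localFilePath is a hypothetical variable):

// Refuse uploads that would trip the apparent 5,000,000-byte payload ceiling.
// If the WebDriver client base64-encodes the file for transfer, the payload
// on the wire is roughly a third larger than the file on disk.
$limit = 5000000; // the constraint quoted in the validation error
$encodedSize = (int) ceil(filesize($localFilePath) / 3) * 4; // base64 size estimate
if ($encodedSize > $limit) {
    throw new RuntimeException("$localFilePath would exceed Device Farm's 5 MB payload limit");
}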

What do the various convertstore errors mean?

I am trying to convert a 2 tier symbol store into a 3-tier symbol store using the convertstore.exe tool as described by Microsoft.
However, I get error messages which do not tell me much. Depending on which store I want to convert I get the following errors:
Failed initial checks.
Failed to lock Symbol Store. Error 0x00000003.
ERROR: Couldn't create X:\...\index2.txt. Error 0x00000005.
Sometimes convertstore seems to run without error message, but it hasn't converted the store.
What do these error messages mean, and how can I mitigate them?
Failed initial checks.
Possible causes:
This error can happen if you run convertstore without any arguments.
Mitigation: use the correct syntax: convertstore.exe -s <store>
The symbol store is already a 3-tier store.
Mitigation: none, if the symbol store is already a 3-tier store. The tool only converts in one direction; it cannot convert back and forth.
Mitigation: if it isn't actually a 3-tier store, delete the file index2.txt.
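For reference, a sketch of the two layouts as I understand them: the 3-tier format adds a directory level named after the first two characters of the file name, and the presence of index2.txt at the store root is what marks a store as 3-tier.
2-tier: <store>\foo.pdb\<id>\foo.pdb
3-tier: <store>\fo\foo.pdb\<id>\foo.pdb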
Failed to lock Symbol Store. Error 0x00000003.
Possible causes:
The symbol store does not have a pingme.txt or 000Admin folder.
Mitigation: specify a symbol store, not an arbitrary folder that happens to contain some symbols.
Mitigation: create a zero-byte file pingme.txt and an empty folder 000Admin (from cmd.exe, for example: type nul > pingme.txt and mkdir 000Admin in the store's root folder).
Failed to move <pdb> > <pdb>. Error 0x00000005.
Possible causes:
The file is currently in use.
Mitigation: close other programs that may currently access the file, then delete index2.txt and run the command again.
You don't have write access to the symbol store.
Mitigation: use Sysinternals Process Monitor to diagnose the issue. Note that convertstore does not use the drive letters of mapped network shares; it uses the SMB share name instead.
Couldn't create index2.txt. Error 0x00000005.
Possible causes:
You don't have write access to the symbol store.
Mitigation: use Sysinternals Process Monitor to diagnose the issue. Note that convertstore does not use the drive letters of mapped network shares; it uses the SMB share name instead.
Failed to move <pdb> > <pdb>. Error 0x000000B7.
Possible causes:
The destination file already exists in the 3-tier part of the store. Someone worked on the symbol store in the meantime and downloaded new symbols, storing them in the 2-tier format, so you now have them in two locations: a 2-tier folder and a 3-tier folder.
Mitigation: delete the 2-tier version manually.
No error message
Possible causes:
convertstore x64 version 10.0.22000.1 suffers from an access violation at convertstore!ConvertAdminFileW+0x1c9
Mitigation: submit the crash dump to Microsoft and hope that they fix it. In the meantime, run the x86 (32-bit) version.

Wait time for Google Cloud service account role change to propagate

I am using a downloaded JSON file containing service account keys, instead of ADC, with code running on my local developer machine and communicating with live GCP Firestore.
After adding a service account to a role, in my case roles/datastore.user, do I have to do anything before it takes effect?
E.g. wait 15 minutes, redownload the JSON, restart some services, something else?
The question relates to this error in automated tests running on my machine:
Test method MyProject.Data.Repositories.FirestoreRepositoryTests.FirestoreAccountDocRepository_UpdateAsync__updates threw exception:
Grpc.Core.RpcException: Status(StatusCode="PermissionDenied", Detail="Permission denied on resource project my-project-prodlike.", DebugException="Grpc.Core.Internal.CoreErrorDetailException: {"created":"#1642697226.430711000","description":"Error received from peer ipv4:172.217.169.74:443","file":"/Users/einari/Projects/grpc/grpc/src/core/lib/surface/call.cc","file_line":1074,"grpc_message":"Permission denied on resource project my-project-prodlike.","grpc_status":7}")
Note - I'm using Contrib.Grpc.Core.M1 since I'm on a new MacBook.
Note - I'm no longer using the above; I now use Google's workaround gRPC lib adapter, just in case. See https://github.com/googleapis/google-cloud-dotnet/issues/7560#issuecomment-975414370.
The permission-denied problem was caused by an incorrect project name (and not by permission actually being denied).
At the top of the Google Cloud Console is the name of the current project. However, that is just a display alias; the real project ID is not shown by default, though it does appear in the URL in the browser.
Unhelpfully, the error message implies that it found the target resource and denied access to it.
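For anyone else hitting this, a hedged sketch of the fix with a Google Cloud client library (the question uses the C# Firestore client, but the pitfall is the same everywhere; PHP is shown here for consistency with the other snippets in this document, and both the project ID and the key file path are placeholders):

require 'vendor/autoload.php';

use Google\Cloud\Firestore\FirestoreClient;

// Pass the real project ID (the identifier visible in the console URL),
// not the friendly display name shown at the top of the Cloud Console.
$firestore = new FirestoreClient([
    'projectId'   => 'my-project-prodlike',           // real project ID, not the alias
    'keyFilePath' => '/path/to/service-account.json', // downloaded service account key file
]);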
I'm so tired.

failed to open stream: Disk quota exceeded - GoDaddy cPanel

[Screenshot of the cPanel dashboard showing the File Usage statistic]
How can I increase this File Usage limit?
Per their web hosting page (https://www.godaddy.com/hosting/web-hosting#compare), the maximum number of files you can have on any plan is 250,000. You either need to delete files or find another host.
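If it helps to track down where the files are, here is a hedged PHP sketch that counts files per top-level directory, so you can see what is eating the file quota (the starting path is a placeholder; adjust it to your hosting account's home directory):

// Count the files under each top-level directory of the account to see
// which ones consume the file (inode) quota. The path below is a placeholder.
$root = '/home/youraccount';
foreach (new DirectoryIterator($root) as $entry) {
    if ($entry->isDot() || !$entry->isDir()) {
        continue;
    }
    $files = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($entry->getPathname(), FilesystemIterator::SKIP_DOTS)
    );
    echo $entry->getFilename() . ': ' . iterator_count($files) . " files\n";
}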

Google Cloud Client Library - upload local file to Cloud Storage - cURL error 56

I am using the PHP Google Cloud client library.
$bucket = $this->storage->bucket($bucketName);
$object = $bucket->upload(
    fopen($localFilePath, 'r'),
    $options
);
This statement sometimes gives the following error:
production.ERROR: cURL error 56: SSL read: error:00000000:lib(0):func(0):reason(0), errno 104 (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) {"exception":"[object] (Google\Cloud\Exception\ServiceException(code: 0): cURL error 56: SSL read: error:00000000:lib(0):func(0):reason(0), errno 104 (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) at /opt/processing/vendor/google/cloud/src/RequestWrapper.php:219)
[stacktrace]
But after I re-run the code, the error is gone.
I have been running this code (a data-processing job) for more than a year and rarely saw this error before. Now that I have moved the code to a new server, I have started to see it. (The error may have happened before, and my old setup simply failed to catch and log it.)
Since the error is reported from Google Cloud (at an error rate below 5%) and disappears when the code is re-run, I think the cause lies on the Google Cloud Platform side.
Has anyone seen the same errors? Is there anything we can do to prevent this error, or do we just have to code our process to retry when it pops up?
Thanks!
The error code you're getting (error 56) is defined as:
CURLE_RECV_ERROR (56)
Failure with receiving network data.
If you're getting this error it's likely you have a network issue that's causing your connections to break. Over the Internet you can expect to get this kind of error occasionally but rarely. If it's happening frequently there's probably something worse going on.
These types of network issues can be caused by a huge number of things but here's some possibilities:
Firewall or security software on your computer.
Network equipment (e.g. switches, routers, access points, firewalls, etc) or network equipment configuration.
An outage or intermittent connection between your ISP and Google (though it looks like Google wasn't detecting any outages recently).
When you're dealing with cloud storage providers (Google Storage, AWS S3, etc) you should always program in automatic retry logic for anything important. The Internet isn't always going to be perfectly reliable and it's best to plan for that in your code instead of relying on not having a problem.
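For example, here is a hedged sketch of retry-with-exponential-backoff around the upload call from the question ($bucket, $localFilePath and $options are the question's variables; the exception class is the one shown in the question's log, while newer library versions use Google\Cloud\Core\Exception\ServiceException; the attempt count and delays are arbitrary):

use Google\Cloud\Exception\ServiceException;

$maxAttempts = 5;
for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
    try {
        $object = $bucket->upload(fopen($localFilePath, 'r'), $options);
        break; // success; stop retrying
    } catch (ServiceException $e) {
        if ($attempt === $maxAttempts) {
            throw $e; // out of attempts; surface the error to the caller
        }
        sleep(2 ** $attempt); // back off: 2, 4, 8, 16 seconds between attempts
    }
}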

SqlBase Database on OneDrive

I have a database on Microsoft OneDrive, and I have 4 valid licenses from Gupta for SqlBase. When I try to run from PC 1 I can access the database, but when I try the same from PC 2 I get this:
Reason: Attempting to open an existing file and a failure has occurred.
Remedy: Determine and correct the cause of the open file failure.
Verify that the specified file exists. Verify the number of
files allowed open for the operating system permits the
additional file, that is, check the FILES= configuration
parameter setting.
I assume this is related to the LOG files of the database and some settings in the Sql.Ini, but I'm not able to find where or how.
The intention is to run the database on OneDrive, buy SqlBase licenses, and run a multi-user system. The application has been built for that.
Where is my thinking wrong?
What am I doing wrong?
Which settings are missing?
Thanks
That won't work.
SqlBase (like every other RDBMS) is built to manage one database file plus its log files.
When multiple instances work on more-or-less replicated data files, this ends in a clash.
There are systems that can work as a distributed cluster (e.g. the document database RavenDB), but they are built to work that way (not with OneDrive, of course, but with their own replication mechanism). SqlBase is not.