Google Text-to-Speech - Google::Cloud::InternalError (13: Internal error encountered.)

I have Google Text to Speech up and running in my application. Most of the time the API works perfectly, and I'm receiving audio file responses that play fine.
Sometimes though I receive the following error:
Google::Cloud::InternalError (13:Internal error encountered.):
I have safeguards in place to prevent my app from hitting usage quotas, so I don't think it's that. Also, before I had these safeguards, going over quota produced an error message that explicitly said so.
Does anyone know what this message means?
Alternatively, does anyone know a good way to handle this error gracefully? (It's a Rails app.)
Thanks

OK, so I don't know exactly what is going wrong on Google's side other than some sort of internal error. However, I did come up with a solution that rescues the error and allows me to continue my text-to-speech job.
Here is what my code looks like for those interested:
def convert_to_audio(text, gender)
  client = Google::Cloud::TextToSpeech.text_to_speech
  input_text = { text: text }
  # Note: the voice can also be specified by name.
  # Names of voices can be retrieved with client.list_voices
  # https://cloud.google.com/text-to-speech/docs/voices
  if gender == 'MALE'
    name = 'en-US-Standard-D'
  else
    name = 'en-US-Standard-E'
  end
  voice = {
    language_code: "en-US",
    name: name,
    ssml_gender: gender
  }
  audio_config = { audio_encoding: "MP3" }
  begin
    retries ||= 0
    response = client.synthesize_speech(
      input: input_text,
      voice: voice,
      audio_config: audio_config
    )
  rescue Google::Cloud::InternalError
    puts "The Google error occurred"
    retry if (retries += 1) < 3
  end
  response
end
Basically, now when I get that error from Google I retry the synthesize_speech call.
Google has pretty tight quotas on this API; I'm guessing that's because larger and more frequent requests tend to throw errors more often, so they're trying to do quality control.
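For anyone who wants to be a bit more defensive than a bare retry, the same idea can be extended with exponential backoff so repeated failures don't hammer the API. This is just a sketch of the retry logic with the Google client call left to the caller's block; in the real method you would rescue Google::Cloud::InternalError rather than StandardError:

```ruby
# Retry a block with exponential backoff. In the real app the block would
# wrap client.synthesize_speech(...) and the rescue would name
# Google::Cloud::InternalError; StandardError is used here only so the
# sketch stays self-contained.
def with_backoff(max_attempts: 3, base_delay: 0.5)
  attempts = 0
  begin
    yield
  rescue StandardError
    attempts += 1
    raise if attempts >= max_attempts
    sleep(base_delay * (2**(attempts - 1))) # 0.5s, 1s, 2s, ...
    retry
  end
end
```

Called as `with_backoff { client.synthesize_speech(...) }`, it gives the API a short breather between attempts instead of retrying immediately, and re-raises once the attempts are exhausted so the caller still sees the failure.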
I did also find this error mapping for those interested:
namespace error {
// These values must match error codes defined in google/rpc/code.proto.
enum Code {
  OK = 0,
  CANCELLED = 1,
  UNKNOWN = 2,
  INVALID_ARGUMENT = 3,
  DEADLINE_EXCEEDED = 4,
  NOT_FOUND = 5,
  ALREADY_EXISTS = 6,
  PERMISSION_DENIED = 7,
  UNAUTHENTICATED = 16,
  RESOURCE_EXHAUSTED = 8,
  FAILED_PRECONDITION = 9,
  ABORTED = 10,
  OUT_OF_RANGE = 11,
  UNIMPLEMENTED = 12,
  INTERNAL = 13,
  UNAVAILABLE = 14,
  DATA_LOSS = 15,
};
}  // namespace error
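In Ruby terms, the `13` in the exception message is just this gRPC status code: the google-cloud libraries raise a distinct exception class per code (Google::Cloud::InternalError for 13, Google::Cloud::UnavailableError for 14, and so on). Here's a small sketch of the retry-relevant part of that mapping; the class names are my understanding of the google-cloud-errors gem and are written as strings so the snippet runs without the gem installed:

```ruby
# Subset of the gRPC status code -> google-cloud-ruby error class mapping.
# Class names assumed from the google-cloud-errors gem; codes 4, 8, 13,
# and 14 are the ones usually worth retrying (with backoff).
GRPC_RETRYABLE_CODES = {
  4  => "Google::Cloud::DeadlineExceededError",
  8  => "Google::Cloud::ResourceExhaustedError",
  13 => "Google::Cloud::InternalError",
  14 => "Google::Cloud::UnavailableError",
}.freeze
```

A rescue clause that names several of these classes (instead of only InternalError) would cover the other transient failure modes in one place.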

Solana metaplex auction fails in ValidateSafetyDepositBoxV2 instruction with "Supplied an invalid creator index to empty payment account"

I'm porting metaplex auction-house to Flutter mobile.
When creating auction with instant sale price of 0.1 wrapped sol, I have encountered the following error at the stage of ValidateSafetyDepositBoxV2 instruction.
The error was "Supplied an invalid creator index to empty payment account", and the only place this message can be printed is Rust's process_empty_payment_account().
The weirdest thing is that the process_empty_payment_account function is called only from the EmptyPaymentAccount instruction, and my program didn't call it.
Any idea what's happening?
Actual error log:
I/flutter ( 2718): {accounts: null, err: {InstructionError: [0, {Custom: 63}]}, logs: [Program p1exdMJcjVao65QdewkaZRUnU6VPSXhus9n2GzWfh98 invoke [1], Program log: Instruction: Validate Safety Deposit Box V2, Program log: Supplied an invalid creator index to empty payment account, Program p1exdMJcjVao65QdewkaZRUnU6VPSXhus9n2GzWfh98 consumed 11849 of 200000 compute units, Program p1exdMJcjVao65QdewkaZRUnU6VPSXhus9n2GzWfh98 failed: custom program error: 0x3f], unitsConsumed: 0}
I found the reason the error was given after deploying a new version of the Rust program with some extra logging: I had passed the wrong value for the metadata address as the 4th account.
pub fn process_validate_safety_deposit_box_v2<'a>(
    program_id: &'a Pubkey,
    accounts: &'a [AccountInfo<'a>],
    safety_deposit_config: SafetyDepositConfig,
) -> ProgramResult {
    let account_info_iter = &mut accounts.iter();
    let safety_deposit_config_info = next_account_info(account_info_iter)?;
    let auction_token_tracker_info = next_account_info(account_info_iter)?;
    let mut auction_manager_info = next_account_info(account_info_iter)?;
    let metadata_info = next_account_info(account_info_iter)?; // <-- the 4th account, where I passed the wrong address
    ....
So the program failed at Metadata::try_from_slice_checked and returned the InvalidCreatorIndex error from the following code.
impl Metadata {
    pub fn from_account_info(a: &AccountInfo) -> Result<Metadata, ProgramError> {
        let md: Metadata =
            try_from_slice_checked(&a.data.borrow_mut(), Key::MetadataV1, MAX_METADATA_LEN)?;
        Ok(md)
    }
}
It's a pity that the code didn't give a more elaborate error.

Azure Mobile Services for Xamarin Forms - Conflict Resolution

I'm supporting a production Xamarin Forms app with offline sync feature implemented using Azure Mobile Services.
We have a lot of production issues related to users losing data, plus general instability that goes away if they reinstall the app. After having a look through, I think the issues are around how conflict resolution is handled in the app.
For every entity that tries to sync we handle MobileServicePushFailedException and then traverse through the errors returned and take action.
catch (MobileServicePushFailedException ex)
{
    foreach (var error in ex.PushResult.Errors) // These are MobileServiceTableOperationErrors
    {
        var status = error.Status; // HTTP status code returned
        // Take action based on this status.
        // If it's 409 (Conflict) or 412 (Precondition Failed), we go into conflict
        // resolution and decide whether the client or server version wins.
    }
}
Our conflict resolution seems too custom to me, and I'm checking whether there are general guidelines.
For example, we seem to be getting empty values for 'CreatedAt' & 'UpdatedAt' timestamps for local and server versions of the entities returned, which is weird.
var serverItem = error.Result;
var clientItem = error.Item;
// Sometimes serverItem.UpdatedAt or clientItem.UpdatedAt is null. Since we use
// these two fields to determine who wins, we are stumped here.
If anyone can point me to some guidelines or sample code on how these conflicts should generally be handled using the information from the MobileServiceTableOperationError, that would be highly appreciated.
I came across the following code snippet in the docs.
// Simple error/conflict handling.
if (syncErrors != null)
{
    foreach (var error in syncErrors)
    {
        if (error.OperationKind == MobileServiceTableOperationKind.Update && error.Result != null)
        {
            // Update failed, reverting to server's copy.
            await error.CancelAndUpdateItemAsync(error.Result);
        }
        else
        {
            // Discard local change.
            await error.CancelAndDiscardItemAsync();
        }

        Debug.WriteLine("Error executing sync operation. Item: {0} ({1}). Operation discarded.",
            error.TableName, error.Item["id"]);
    }
}
An example of surfacing conflicts to the UI, which I found in this doc:
private async Task ResolveConflict(TodoItem localItem, TodoItem serverItem)
{
    // Ask the user to choose a resolution between the two versions.
    MessageDialog msgDialog = new MessageDialog(
        String.Format("Server Text: \"{0}\" \nLocal Text: \"{1}\"\n",
            serverItem.Text, localItem.Text),
        "CONFLICT DETECTED - Select a resolution:");

    UICommand localBtn = new UICommand("Commit Local Text");
    UICommand serverBtn = new UICommand("Leave Server Text");
    msgDialog.Commands.Add(localBtn);
    msgDialog.Commands.Add(serverBtn);

    localBtn.Invoked = async (IUICommand command) =>
    {
        // To resolve the conflict, update the version of the item being committed.
        // Otherwise, you will keep catching a MobileServicePreconditionFailedException.
        localItem.Version = serverItem.Version;

        // Update recursively here, in case another change happened while the user was deciding.
        UpdateToDoItem(localItem);
    };

    serverBtn.Invoked = async (IUICommand command) =>
    {
        RefreshTodoItems();
    };

    await msgDialog.ShowAsync();
}
I hope this helps provide some direction. Although the Azure Mobile docs have been deprecated, the SDK hasn't changed, so they should still be relevant. If this doesn't help, let me know what you're using for a backend store.

CakePHP email timeout & PHP maximum execution time

My application uses an Exchange SMTP server for sending emails. At present we don't have any message queuing in our architecture and emails are sent as part of the HTTP request/response cycle.
Occasionally the Exchange server has issues and times out while the email is sending, and as a result the email doesn't send. Sometimes, Cake recognizes the time out and throws an exception. The application code can catch the exception and report to the user that something went wrong.
However, on other occasions PHP hits its maximum execution time before Cake can throw the exception and so the user just gets an error 500 with no useful information as to what happened.
In an effort to combat this, I overrode CakeEmail::send() in a custom class CustomEmail (extending CakeEmail) as follows:
public function send($content = null)
{
    // Get PHP's max execution time
    $phpTimeout = ini_get("max_execution_time");

    // If $this->_transportClass isn't SMTP (e.g. debug), just invoke parent::send() and return
    if (!$this->_transportClass instanceof SmtpTransport) {
        return parent::send($content);
    }
    $cfg = $this->_transportClass->config();
    $emailTimeout = isset($cfg["timeout"]) && $cfg["timeout"] ? $cfg["timeout"] : 30;

    // If PHP max execution time is set (and isn't 0), set it to the email timeout plus
    // 1 second; this should mean the SMTP server always times out before PHP does
    if ($phpTimeout) {
        set_time_limit($emailTimeout + 1);
    }

    // Send email
    $send = parent::send($content);

    // Reset PHP timeout to previous value
    set_time_limit($phpTimeout);

    return $send;
}
However, this isn't always successful, and I have had a few instances of this:
Fatal Error: Maximum execution time of 31 seconds exceeded in [C:\path\app\Vendor\pear-pear.cakephp.org\CakePHP\Cake\Network\CakeSocket.php, line 303]
Line 303 of CakeSocket.php is the $buffer = fread(...) line in this CakeSocket::read() method:
public function read($length = 1024) {
    if (!$this->connected) {
        if (!$this->connect()) {
            return false;
        }
    }

    if (!feof($this->connection)) {
        $buffer = fread($this->connection, $length);
        $info = stream_get_meta_data($this->connection);
        if ($info['timed_out']) {
            $this->setLastError(E_WARNING, __d('cake_dev', 'Connection timed out'));
            return false;
        }
        return $buffer;
    }
    return false;
}
Any ideas?
The problem lies here:
// If PHP max execution time is set (and isn't 0), set it to the email timeout plus
// 1 second; this should mean the SMTP server always times out before PHP does
if ($phpTimeout) {
    set_time_limit($emailTimeout + 1);
}
In some places in my code I was raising the PHP max execution time to more than 30 seconds, but the email timeout was still only 30. This code then dropped the PHP limit back down to 31 seconds, and I'm guessing other work happening before the email started to send used up that budget and caused the issue.
Fixed code:
// If PHP max execution time is set (and isn't 0) and is no greater than the email
// timeout, set it to the email timeout plus 1 second; this should mean the SMTP
// server always times out before PHP does
if ($phpTimeout && $phpTimeout <= $emailTimeout) {
    set_time_limit($emailTimeout + 1);
}

Raising events in KRL without using explicit

I'm writing an app that raises events, similar to how Phil Windley's personal data manager application works. However, if I try to use any event domain but explicit, the events don't get propagated. The following rules work fine with explicit as the domain, but not with driverreg.
rule driver_info_submit {
  select when web pageview ".*"
  pre {
    driver_name = "Joe Driver";
    driver_phone = "111-555-1212";
    msg = <<
      Current driver info: #{ent:driver_name}, #{ent:driver_phone}
    >>;
  }
  notify("Started", msg);
  fired {
    raise explicit event new_driver_data with driver_name=driver_name and driver_phone=driver_phone;
  }
}

// Save driver name
rule save_driver_name {
  select when explicit new_driver_data
  pre {
    driver_name = event:param("driver_name") || ent:driver_name;
    driver_phone = event:param("driver_phone") || ent:driver_phone;
  }
  noop();
  always {
    set ent:driver_name driver_name;
    set ent:driver_phone driver_phone;
    raise explicit event driver_data_updated;
  }
}

rule driver_info_updated {
  select when explicit driver_data_updated
  {
    notify("Driver name", ent:driver_name);
    notify("Driver phone", ent:driver_phone);
  }
}
It doesn't seem to be a problem with whether the app is deployed, as I've tried it both ways. What am I missing?
Only certain domains are allowed in the raise statement:
explicit
http
system
notification
error
pds
This may be relaxed in the future.
This is covered in the documents here: https://kynetxdoc.atlassian.net/wiki/display/docs/Raising+Explicit+Events+in+the+Postlude
(note that this is a temporary home for the documentation)

Error loading a persisted workflow

I have a workflow started and persisted using messaging activities.
The correlation between the Start initial command and the Stop final command works well if they're sent within few seconds.
Problems begin when the workflow is unloaded, because the subsequent Stop message then throws the following FaultException:
If LoadWorkflowByInstanceKeyCommand.AssociateLookupKeyToInstanceId is not specified, the LookupInstanceKey must already be associated to an instance, or the LoadWorkflowByInstanceKeyCommand will fail. For this reason, it is invalid to also specify the LookupInstanceKey in the InstanceKeysToAssociate collection if AssociateLookupKeyToInstanceId isn't set
Can anybody help me?
The variables inside the workflow are of types int and XDocument.
This is the code to initialize the WorkflowServiceHost:
WorkflowServiceHost serviceHost = new WorkflowServiceHost(myWorkflow, new Uri(serviceUri));

ServiceDebugBehavior debug = serviceHost.Description.Behaviors.Find<ServiceDebugBehavior>();
if (debug == null)
{
    debug = new ServiceDebugBehavior();
    serviceHost.Description.Behaviors.Add(debug);
}
debug.IncludeExceptionDetailInFaults = true;

WorkflowIdleBehavior idle = serviceHost.Description.Behaviors.Find<WorkflowIdleBehavior>();
if (idle == null)
{
    idle = new WorkflowIdleBehavior();
    serviceHost.Description.Behaviors.Add(idle);
}
idle.TimeToPersist = TimeSpan.FromSeconds(2);
idle.TimeToUnload = TimeSpan.FromSeconds(10);

var behavior = new SqlWorkflowInstanceStoreBehavior
{
    ConnectionString = ConfigurationManager.ConnectionStrings["WorkflowPersistence"].ConnectionString,
    InstanceEncodingOption = InstanceEncodingOption.None,
    InstanceCompletionAction = InstanceCompletionAction.DeleteAll,
    InstanceLockedExceptionAction = InstanceLockedExceptionAction.BasicRetry,
    HostLockRenewalPeriod = new TimeSpan(00, 00, 30),
    RunnableInstancesDetectionPeriod = new TimeSpan(00, 00, 05)
};
serviceHost.Description.Behaviors.Add(behavior);

serviceHost.Open();
Looking at the database, it seems that the workflow is never suspended.
Any help appreciated,
thank you
I'm not really sure what is going on here, but it sounds like there are types used in the workflow that cannot be serialized, which prevents the workflow from being stored to disk. When you say "Looking at the database, it seems that the workflow is never suspended," do you really mean suspended? And why do you expect the workflow to be suspended?
What happens if you send just the start message to the workflow and wait 2 seconds? Do you get a new record in the persistence database?