I have created a SQL instance on Google Cloud and I need to change the timezone. I have already seen the documentation, and I added the flag default_time_zone and set the value to 06:00, but the console doesn't let me write the colon.
How can I write the value? Thanks in advance.
The proper format for the default_time_zone flag is +/-HH:MM; e.g., to set it to GMT+6 you would write the value +06:00. Don't forget the leading zero.
To modify the timezone, update the Google Cloud SQL flag named default_time_zone. This or any other database flag can be updated as follows:
1) In the Google Cloud Platform Console, open an existing project by selecting the project name.
2) Open the instance and click Edit.
3) Scroll down to the Flags section.
4) To set a flag that has not been set on the instance before, click Add item, choose the flag from the drop-down menu, and set its value.
5) Click Save to save your changes.
6) Confirm your changes under Flags on the Overview page.
When you add or modify these flags, your instance will automatically restart. Note that you cannot modify flags on failover replicas.
For further reading, see the documentation for setting Cloud SQL Flags.
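If you prefer the command line, the same flag can be set with the gcloud CLI; a minimal sketch, assuming your instance is named my-instance:

gcloud sql instances patch my-instance --database-flags=default_time_zone=+06:00

Note that --database-flags replaces the full set of flags on the instance, so include every flag you want to keep in the list.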
I know how to add or remove a store with PowerShell using Microsoft.Office.Interop.Outlook, but I haven't found any information about changing values.
I read https://learn.microsoft.com/en-us/office/vba/api/outlook.namespace#methods but I don't see a method available for setting properties.
Context: User's PST files have been moved from one path to another. I'm trying to avoid disruption wherever possible, so I'm writing a PS script to move the PST files, and then update Outlook with the new path.
Since removing and re-adding the stores will break user-defined stuff like rules, I'm hoping for a way to change existing store filepaths that will require no user action.
Is this possible at all?
As a second option, can I pull the existing rules, and modify them (or recreate them)?
The PST store entry id embeds the PST path in it (you can see it in OutlookSpy - I am its author - click the IMessage / IMAPIFolder / IMsgStore button, select PR_STORE_ENTRYID, and click "..." next to the Value edit box).
If a rule includes a store id (e.g. copy / move message action), you would need to reset / recreate the rule.
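If you do end up touching the rules, the Outlook object model exposes them per store via Store.GetRules(). A minimal PowerShell sketch; the store display name and target folder below are purely hypothetical:

# Sketch only: re-point a rule's move-to-folder action at a folder in the
# re-added store. Store and folder names are hypothetical.
$outlook = New-Object -ComObject Outlook.Application
$store = $outlook.Session.Stores | Where-Object { $_.DisplayName -eq 'My PST' }
$rules = $store.GetRules()
foreach ($rule in $rules) {
    $action = $rule.Actions.MoveToFolder
    if ($action -and $action.Enabled) {
        # Point the action at the corresponding folder in the new store
        $action.Folder = $store.GetRootFolder().Folders.Item('Archive')
    }
}
$rules.Save()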
If you don't want to remove / add a store, you can reset the store location using the ProfMan library (I am also its author) directly in the profile section in the registry. See https://www.dimastr.com/redemption/profman_examples.htm#example2 for an example of how to read PST paths. You can modify the script to set the path instead.
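As a rough illustration of the registry side of this (without ProfMan): the sketch below assumes Outlook is closed, an Outlook 2016+ profile root, and that the PST path lives in the profile section as the value 001f6700 (PR_PST_PATH_W), a null-terminated Unicode string stored as binary. Verify those details with ProfMan or OutlookSpy before relying on them.

# Sketch only: rewrite a PST path directly in the profile sections.
# The registry root and the 001f6700 value name are assumptions to verify.
$oldPath = 'C:\Old\mail.pst'   # hypothetical paths
$newPath = 'D:\New\mail.pst'
$profileRoot = 'HKCU:\Software\Microsoft\Office\16.0\Outlook\Profiles'
Get-ChildItem $profileRoot -Recurse | ForEach-Object {
    $bytes = $_.GetValue('001f6700')
    if ($bytes) {
        $path = [Text.Encoding]::Unicode.GetString($bytes).TrimEnd("`0")
        if ($path -eq $oldPath) {
            $newBytes = [Text.Encoding]::Unicode.GetBytes($newPath + "`0")
            Set-ItemProperty -Path $_.PSPath -Name '001f6700' -Value $newBytes -Type Binary
        }
    }
}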
I already followed these great instructions on how to dynamically create dialog node options from generic input, and it's working like a charm. But I cannot see how to hand over the chosen option value to the next node for further processing. Is there any documentation on how to pass the chosen option value to the child node?
You can store any selected option and other information in context variables. They are passed around and can be accessed in other nodes. The information is available until you unset or delete the context variable.
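For illustration, a minimal sketch of the dialog node JSON (the context variable name selected_option is arbitrary): the node that presents the options stores the user's choice in its context, since a clicked option is submitted as the user input:

{
  "context": {
    "selected_option": "<? input.text ?>"
  }
}

The child node can then reference it as $selected_option in its conditions or responses.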
My simple experiment reads from an Azure Storage Table, selects a few columns, and writes to another Azure Storage Table. This experiment runs fine on the workspace (let's call it Workspace1).
Now I need to move this experiment as-is to another workspace (call it Workspace2) using PowerShell and need to be able to run the experiment.
I am currently using this Library - https://github.com/hning86/azuremlps
Problem:
When I copy the experiment using 'Copy-AmlExperiment' from Workspace1 to Workspace2, the experiment and all its properties get copied, except the Azure Table account key.
Now, this experiment runs fine if I manually enter the account key for the Import/Export modules on studio.azureml.net.
But I am unable to do this via PowerShell. If I export (Export-AmlExperimentGraph) the copied experiment from Workspace2 as JSON, insert the AccountKey into the JSON file, and import it (Import-AmlExperiment) into Workspace2, the experiment fails to run.
In PowerShell I get an "Internal Server Error : 500".
While running on studio.azureml.net, I get the notification "Your experiment cannot be run because it has been updated in another session. Please re-open this experiment to see the latest version."
Is there any way to move an experiment with external dependencies to another workspace and run it?
Edit: I think the problem has something to do with how the experiment handles the AccountKey. When I enter it manually, it is converted into a JSON array comprising RecordKey and IndexInRecord. But when I upload the JSON experiment with the AccountKey, it remains as-is and does not get resolved into RecordKey and IndexInRecord.
For me, publishing the experiment as a private experiment to the Cortana Gallery is one of the most useful options. Only people with the link can see and add the experiment from the gallery. At the link below I've explained the steps I followed.
https://naadispeaks.wordpress.com/2017/08/14/copying-migrating-azureml-experiments/
When the experiment is copied, the password is wiped for security reasons. If you want to programmatically inject it back, you have to set another metadata field to signal that you are setting a plain-text password, not an encrypted one. If you export the experiment in JSON format, you can easily figure this out.
I think I found the reason you are unable to inject the credentials back.
Export the JSON graph into your local disk, then update whatever parameter has to be updated.
Also, you will notice that the credentials are stored as 'Placeholders' instead of 'Literals'; hence you need to change them to Literals.
This you can do by traversing through the JSON to find the relevant parameters you need to update.
Here is a brief illustration of changing the Placeholder to a Literal.
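A minimal PowerShell sketch of the round trip, using the azuremlps cmdlets named in the question; the cmdlet parameter names and the JSON property names (ModuleNodes, ModuleParameters, Name, Value, ValueType) are assumptions based on the exported graph described above, so check your own export and Get-Help before relying on them:

# Sketch only: export the graph, flip the account-key parameter from a
# Placeholder to a Literal holding the plain-text key, and re-import it.
# Cmdlet parameter names below are assumptions; verify with Get-Help.
$accountKey = '<your-storage-account-key>'   # placeholder
Export-AmlExperimentGraph -ExperimentId $expId -Filename 'C:\temp\exp.json'
$graph = Get-Content 'C:\temp\exp.json' -Raw | ConvertFrom-Json
foreach ($module in $graph.Graph.ModuleNodes) {
    foreach ($param in $module.ModuleParameters) {
        if ($param.Name -eq 'Account Key') {
            $param.Value = $accountKey        # plain-text key
            $param.ValueType = 'Literal'      # was 'Placeholder'
        }
    }
}
$graph | ConvertTo-Json -Depth 32 | Set-Content 'C:\temp\exp.json'
Import-AmlExperiment -Filename 'C:\temp\exp.json'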
I am using MS UI Automation in C++ to control a third-party WPF application. I can read the value of an edit control (IUIAutomationElement objects). When I try to set the value with SetValue (IUIAutomationValuePattern objects), it does not return an error, but it does not set the value of the edit control.
The manifest contains uiAccess="true", the application is signed, and it is run from C:\Program Files.
I have experienced that some UI elements do not implement the UI Automation provider correctly; as a result, some patterns simply do not work as expected or even fail (although they are shown as available).
To verify that the control is at fault and not your code, you can exercise the ValuePattern via Inspect.exe: open Inspect, select the control, then choose Action (toolbar) -> ValuePattern.SetValue.
As a workaround I would suggest using SendKeys. If you need to focus the element first, yourAutomationElement.SetFocus() is your friend. If SetFocus does not work, get the ClickablePoint/BoundingRectangle of the AutomationElement and use user32.dll to click the object.
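Your code is C++, but the idea is easy to sketch against the same UI Automation API from PowerShell/.NET; the AutomationId below is hypothetical:

# Sketch: focus the element via UI Automation, then type into it with SendKeys.
Add-Type -AssemblyName UIAutomationClient, UIAutomationTypes, System.Windows.Forms
$root = [System.Windows.Automation.AutomationElement]::RootElement
$autoIdProp = [System.Windows.Automation.AutomationElement]::AutomationIdProperty
$cond = New-Object System.Windows.Automation.PropertyCondition($autoIdProp, 'txtValue')
$element = $root.FindFirst([System.Windows.Automation.TreeScope]::Descendants, $cond)
$element.SetFocus()
[System.Windows.Forms.SendKeys]::SendWait('new value')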
I was moving a database to Google Cloud SQL; the database previously had a max_allowed_packet of 20M.
Currently the Google Cloud SQL default for max_allowed_packet is 1M.
Is there any way to increase this variable to 20M? I have already tried the following:
set max_allowed_packet = 20971520;
Which returns:
Error Code: 1621. SESSION variable 'max_allowed_packet' is read-only. Use SET GLOBAL to assign the value
and then:
set global max_allowed_packet = 20971520;
This returns the error:
Error Code: 1227. Access denied; you need (at least one of) the SUPER privilege(s) for this operation
Thank you in advance for your help!
To change max_allowed_packet on Google Cloud SQL, go to the overview of your instance in the Cloud Console, click Edit, and look for the MySQL Flags section at the bottom of the page. max_allowed_packet is one of the flags you can set there. Set the value you want and save/confirm.
You can now set it yourself by editing the instance in Developer Console.
All the settable flags are documented here: https://cloud.google.com/sql/docs/mysql-flags
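The flag can also be set from the gcloud CLI; a sketch, assuming an instance named my-instance:

gcloud sql instances patch my-instance --database-flags=max_allowed_packet=20971520

Keep in mind that --database-flags replaces all flags previously set on the instance, so include any others you still need.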
In my case I couldn't update the max_allowed_packet setting because I had the flag sql_mode=TRADITIONAL, which requires the value to be a multiple of 1024.