How can I set sql_mode to a list of values - google-cloud-sql

I am trying to use the 2nd-gen Cloud SQL and would like to change the SQL mode. In the UI, I can only set sql_mode to one value from a drop-down list, not to multiple values (e.g., "STRICT_TRANS_TABLES,ALLOW_INVALID_DATES"). What would be the best way to accomplish that?
Cheers,
Andres

I know this post is a year old, but I stumbled upon it when I had a problem with sql_mode while migrating a database from MySQL 5.5 to Google Cloud SQL running 5.7. Although SET GLOBAL sql_mode = '...' normally accepts any valid combination of values, it took me hours before I gave up and concluded that we could not set multiple values on Google Cloud SQL.
Google only allows one value to be set on the sql_mode flag for now. If your problem is removing ONLY_FULL_GROUP_BY (the OP does not mention why he wants to customize the values) without removing the rest of the sql_mode values, then setting the value TRADITIONAL in the Console, or running gcloud sql instances patch <instance_name> --database-flags sql_mode=TRADITIONAL, will drop that value while keeping the rest of the string.
From MySQL 5.7 Documentation:
Before MySQL 5.7.4, and in MySQL 5.7.8 and later, TRADITIONAL is equivalent to STRICT_TRANS_TABLES, STRICT_ALL_TABLES, NO_ZERO_IN_DATE, NO_ZERO_DATE, ERROR_FOR_DIVISION_BY_ZERO, NO_AUTO_CREATE_USER, and NO_ENGINE_SUBSTITUTION.
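For example, a minimal sketch of the patch command (my-instance is a placeholder; note that patching database flags may restart the instance):
gcloud sql instances patch my-instance --database-flags sql_mode=TRADITIONAL
Then, from a MySQL client, you can verify the expanded mode string:
SELECT @@GLOBAL.sql_mode;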
I would have added this as a comment above, but I can't add one yet due to lacking reputation points.

This is not supported right now by Google Cloud SQL. You can only set one value.

Another potential solution is to set sql_mode to HIGH_NOT_PRECEDENCE.
Once set in Cloud SQL, the sql_mode string becomes:
HIGH_NOT_PRECEDENCE
All other flags are removed!
We were coming from an older project, so this solution might not work for everyone, but it seems to be working well for us, and it is something that can be tried quickly.

Related

Why is REGEXP_MATCH not working like expected in Google Data Studio?

I am trying to generate better reports in Google Data Studio, so I started using custom fields with regular expressions, which do not work as expected.
Given, for example, a custom field city with the value "I love Berlin" in it, I created this statement:
CASE
WHEN REGEXP_MATCH(city,".*Berlin.*") THEN "Berlin"
ELSE "Other"
END
My expected result would be a match returning "Berlin", but I get "Other" instead.
I tried a few different things without positive results.
The CASE statement provided in the question works as expected (edit: for future reference, this was a bug specific to the PostgreSQL connector that has since been resolved; see the note below).
You could have a look at adding a (capturing group) as well as a case-insensitive flag (?i) to see if that resolves the issue:
CASE
WHEN REGEXP_MATCH(city, ".*(?i)(Berlin).*") THEN "Berlin"
ELSE "Other"
END
Note: The 25 Mar 2021 update explicitly states that the issue was resolved for the PostgreSQL connector:
Improved text functions in PostgreSQL
We've fixed a bug that prevented the CONTAINS_TEXT, STARTS_WITH, ENDS_WITH, and REGEXP_MATCH functions from working correctly with the PostgreSQL connector.
It seems like regexp functions can't be used with the PostgreSQL live connector (maybe MySQL as well). I solved my problem by using data extraction for the fields I needed to run regular expressions on, and I included the resulting charts in my report, which is also connected to the live connector.

Problem connecting an ADODB.Recordset to a form's Recordset in the form's On Open event

I have an Access project that is "linked" to a SQL Server database and now works like a charm. The last problem I solved was making sure any Boolean fields were turned into bit fields with a default of 0, and adding a TIMESTAMP column in SQL Server, because Access is not so much of a genius with record locking (so I was told).
Now I am trying to connect directly to SQL Server by opening an ADODB.Recordset (which runs a stored procedure) and assigning it to the form's Recordset in the form's On Open event. I get the data fine, but I get a locking (write conflict) error back.
The ADODB.Recordset's CursorLocation is set to adUseClient.
Obviously the form's RecordSource is no longer attached or assigned to the linked SQL table.
Am I missing something? Do I need to assign anything to the form's RecordSource?
The idea is to connect directly through stored procedures instead of linked tables.
Thanks so much for any help.
Adding a timestamp column is a VERY good idea. And do not confuse the name timestamp with an actual date/time column; the correct term is "row version".
This issue has ZERO to do with locking. The reason you want this column added is that Access then uses it to determine when a record is dirty and, more importantly, whether the record has been changed. If you omit this column, Access reverts to a column-by-column comparison. Not only does this cause more network traffic, but worse, for real (floating-point) values, rounding can produce the dreaded "this record has been changed by another user" error even though the record has not been changed.
You even see this option included in SSMA (the Access-to-SQL migration wizard), and I believe it is a default.
So yes, it is VERY highly recommended that you include/add a rowversion column to all tables; this will help Access in a HUGE way.
And as noted, there is a long-standing issue with bit fields that don't have a default setting. You don't want to allow bit fields to be created with a null value, so ensure there is a default value of 0 (you set this on the SQL Server side).
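As a minimal T-SQL sketch of both fixes (the table and column names here are hypothetical):
-- add a rowversion column so Access can detect changed records
ALTER TABLE dbo.Invoices ADD RowVer rowversion;
-- back-fill existing null bits, give the column a default of 0, and disallow nulls
UPDATE dbo.Invoices SET Active = 0 WHERE Active IS NULL;
ALTER TABLE dbo.Invoices ADD CONSTRAINT DF_Invoices_Active DEFAULT 0 FOR Active;
ALTER TABLE dbo.Invoices ALTER COLUMN Active bit NOT NULL;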
Ok, now that we have the above cleared up?
It is not really all that clear why you want or need to adopt a stored procedure and code to load/fill the form. You will not see any better performance than if you bind the form DIRECTLY to the linked table. Access will ONLY pull the records you tell that form to load.
So, bind the form directly to the linked table. Then you can launch/open the form to, say, one record with this:
DoCmd.OpenForm "frmInvoices", , , "InvoiceNum = 123"
Now, you would of course change the above "123" to some variable, or some way to prompt the user for which invoice to work on.
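For example, a minimal sketch (the form and field names are taken from the line above; the prompt itself is hypothetical):
Dim strInvoice As String
strInvoice = InputBox("Which invoice number do you want to open?")
If Len(strInvoice) > 0 Then
    ' only the matching record comes down the network pipe
    DoCmd.OpenForm "frmInvoices", , , "InvoiceNum = " & strInvoice
End If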
The invoice form will then load that ONE record. Even if the bound (linked) table has 2 million rows, only ONE record will come down the network pipe. All the extra work of a stored procedure, creating a recordset, and pulling it gains you ZERO in terms of performance; you are writing all kinds of code that simply is not required, and you will not beat the one line of code above, which automatically filters and pulls down only the record that meets the given criteria (in this example, the invoice number).
So:
Yes, all tables need a PK
Yes, all tables should have a rowversion (but it is called a timestamp column; it has nothing to do with the actual time).
Yes, all bit fields need a default of 0 - don't allow null values.
And last but not least?
I don't see any gains in performance, or even any advantages, in attempting to code your way through this by adopting stored procedures and introducing recordset code when none is required; worse, it will not gain you performance anyway.

Tableau 9.3: using a multi-option set as a filter across multiple data sources

First post! I am kinda feeling the pressure :)
I've created a multiple-option filter for my dashboard using a set (with "include all members"), which is great for the sheets that use that data source as the primary, but when it's the secondary source I've hit a brick wall: I cannot see (or find any reference in searches to) how to use the filter. Any calculated dimensions I've seen reference In/Out. Is there a way around this, or is there something I'm missing?
Thanks.
You can't do this when the set is based on the secondary source in a data-blending scenario; currently Tableau only allows it to be applied to the primary source. We are hoping this changes in version 10.0.

Solr AutoCommit not working with PostgreSQL

I am using Solr 4.10.0 with PostgreSQL 9.3. I am able to configure my Solr core properly using data-config.xml and search across the different database tables. However, I am not able to set up the autoCommit feature. Whenever a row is added to a table, I expect it to start appearing in search results after the maxTime (1 minute), but that doesn't happen. I have to explicitly rebuild the index by doing a full data import, and then everything works fine.
My solrconfig.xml is:
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>true</openSearcher>
  </autoCommit>
  <autoSoftCommit>
    <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
  </autoSoftCommit>
</updateHandler>
Is there something extra that needs to be done to use autoCommit here? I checked my log files as well, but there is no error/exception. What am I missing?
Please see the link below; I think it explains what is happening in your case:
SOLR: What does an autoSoftCommit maxtime of -1 mean?
First off, you can see the expression ${solr.autoSoftCommit.maxTime:-1} within the tag. This makes use of Solr's variable substitution, a feature described in detail in the reference guide. If that variable has not been substituted by any of those means, -1 is taken as the value for that configuration.
Setting that maxTime to -1 effectively turns soft autocommit off.
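As a sketch of the fix (60000 ms is just an example value), you can either hard-code a positive maxTime in solrconfig.xml:
<autoSoftCommit>
  <maxTime>60000</maxTime>
</autoSoftCommit>
or keep the variable and supply the property when starting Solr, for example:
java -Dsolr.autoSoftCommit.maxTime=60000 -jar start.jar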

What can cause an inability to set QRYTIMLMT in DB2 from .NET?

We are using IBM's data provider from C# (.NET 4.5) to query an iSeries DB2 database. Normally this works very well, but for some queries DB2 reports the error "SQL0666 - SQL query exceeds specified time limit or storage limit".
I have tried setting the command timeout to 0, but to no effect. I have also tried to execute, in the manner explained here, the CHGQRYA command to set the QRYTIMLMT value to *NOMAX (or some other large value), but seemingly to no effect. However, if I use the same command to set QRYSTGLMT (the storage limit), it takes effect. Thus I know that I am using the command correctly and that it gets interpreted and executed by the database.
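For reference, a common way to issue that command through SQL is via QCMDEXC (this is an assumption about the exact call used; the second argument is the length of the command string as DECIMAL(15,5)):
CALL QSYS.QCMDEXC('CHGQRYA QRYTIMLMT(*NOMAX)', 0000000025.00000)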
So, what can cause my inability to set the QRYTIMLMT value?
Also, our "DBA" has set the limit to *NOMAX on his end, and for queries not running through the .NET provider, everything works fine.
We're using IBM's client tools, version V6R1, with service pack SI42423.
OK, so after lots of testing, I found the problem.
We're using the DeriveParameters() method to set the parameter types correctly, and if this method is called before setting CommandTimeout, the latter has no effect(!). The solution was to reverse the order of these two statements.
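A minimal C# sketch of the fix (assuming IBM's IBM.Data.DB2.iSeries provider; the connection string and procedure name are placeholders):
using System.Data;
using IBM.Data.DB2.iSeries;

class TimeoutFix
{
    static void Query(string connectionString)
    {
        using (var conn = new iDB2Connection(connectionString))
        {
            conn.Open();
            using (var cmd = new iDB2Command("MYLIB.MYPROC", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;

                // Set the timeout BEFORE deriving parameters; 0 means no limit.
                cmd.CommandTimeout = 0;

                // If DeriveParameters() ran first, a CommandTimeout set
                // afterwards had no effect.
                iDB2CommandBuilder.DeriveParameters(cmd);

                cmd.ExecuteNonQuery();
            }
        }
    }
}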