MySQL Workbench Results - mysql-workbench

How do I disable the results in Workbench?
The reason I am asking is that after a certain number of results are displayed, MySQL Workbench pops up with a message like "The maximum result is reached for results. Do you want to continue?", or something along those lines.

You don't disable results or result sets, as these are the meat of any database client tool. A result set is what you get when you query your database; disabling that would mean you no longer do anything meaningful with the database.
The message you get, however, is shown because you got too many result sets to display. Each query that returns a result set creates a tab in the result area. Creating too many tabs not only makes no sense, since you cannot really see them all, but also consumes a lot of system resources. So there is a sanity check in MySQL Workbench to avoid creating too many.
Solution: make sure your scripts/queries don't return more result sets than the limit configured in your settings, or increase that limit via Preferences -> SQL Editor -> Query Editor -> Max number of result sets (but use a sane value there!).
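For illustration, a minimal sketch (hypothetical table names): when run as one script, each SELECT below opens its own result tab, so a script containing hundreds of them will trip this sanity check.

-- Each SELECT that returns rows gets its own result tab in Workbench.
SELECT * FROM customers;
SELECT * FROM orders;
SELECT * FROM order_items;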

Related

Problem with connecting an ADODB.Recordset to a form's Recordset in the On Open event of the form

I have an Access project that is "linked" to a SQL database and now works like a charm. The last problem I solved was making sure any Boolean fields were turned into bit fields with a default of 0, and adding a TIMESTAMP column in SQL, due to the fact that Access is not so much of a genius with record locking (so I was told).
Now I am trying to connect directly to SQL Server by using an ADODB.Recordset and setting the form's Recordset to that recordset in the On Open event of the form (the recordset runs a stored procedure in SQL). I get the data fine, but I get a locking error (write conflict) back.
The ADODB.Recordset's CursorLocation is set to adUseClient.
Obviously the form's RecordSource is no longer attached or assigned to the linked SQL table.
Am I missing something? Do I need to assign anything to the form's RecordSource?
The idea is to connect directly through the use of stored procedures instead of linked tables.
Thanks so much for any help.
Adding a timestamp column is a VERY good idea. And do not confuse the name "timestamp" with an actual date/time column; the correct term is "row version".
This issue has ZERO to do with locking. The REASON you want this column added is that Access will then use it to determine when the record is dirty and, more important, to figure out whether the record has been changed. If you omit this column, Access reverts to a column-by-column testing approach. Not only does this cause more network traffic, but worse, for real-type values, rounding can trigger the dreaded "this record has been changed by another user" error even though the record has not actually been changed; columns with floating-point values will cause Access to error out with that changed-record message.
That is why, for all tables, you see this option included in SSMA (the Access-to-SQL migration wizard), and I believe it is on by default.
So yes, it is VERY highly recommended that you include/add a rowversion column to all tables; this will help Access in a HUGE way.
And as noted, there is a long-standing issue with bit fields that don't have a default setting. You don't want bit fields to be added/created with a null value, so ensure there is a default value of 0 (you set this on the SQL Server side). A sketch of both changes follows.
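A minimal T-SQL sketch of both changes, assuming a hypothetical Invoices table with a bit column named IsPaid:

-- Add a rowversion column; the type is still spelled timestamp in older
-- scripts, but it has nothing to do with dates or times.
ALTER TABLE dbo.Invoices ADD RowVer rowversion;

-- Clear out existing NULLs, give the bit column a default of 0,
-- and disallow NULL going forward.
UPDATE dbo.Invoices SET IsPaid = 0 WHERE IsPaid IS NULL;
ALTER TABLE dbo.Invoices ADD CONSTRAINT DF_Invoices_IsPaid DEFAULT 0 FOR IsPaid;
ALTER TABLE dbo.Invoices ALTER COLUMN IsPaid bit NOT NULL;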
OK, now that we have the above cleared up?
It is not really clear why you want, need, or are adopting a stored procedure and code to load/fill up the form. You will not see any better performance than if you bind the form DIRECTLY to the linked table. Access will ONLY pull the records you tell that form to load.
So, bind the form directly to the linked table. Then you can launch/open the form to one record with this:
DoCmd.OpenForm "frmInvoices", , , "InvoiceNum = 123"
Now, you would of course change the above "123" to some variable, or some way to prompt the user for which invoice to work on.
The invoice form will then load that ONE record. So even if the form's bound (linked) table has 2 million rows, only ONE record will come down the network pipe. So, all that extra work of a stored procedure, creating a recordset, and pulling it? You will gain ZERO in terms of performance, and you are writing all kinds of code when it simply is not required; you will not achieve any performance superior to the above one line of code, which will automatically filter and ONLY pull down the record that meets the given criteria (in this example, the invoice number).
So:
Yes, all tables need a PK
Yes, all tables should have a rowversion (the column type is called timestamp, but it has nothing to do with the actual time).
Yes, all bit fields need a default of 0 - don't allow null values.
And last but not least?
I don't see any gains in performance, or even any advantages, in attempting to code your way through this by adopting stored procedures and introducing recordset code when none is required; worse, it will not gain you any performance anyway.

Can I debug a PostgreSQL query sent from an external source, that I can't edit?

I see how to debug queries stored as Functions in the database. But my problem is with an external QGIS plugin that connects to my Postgres 10.4 via network and does a complex query and calculations, and stores the results back into PostGIS tables:
FOR r IN c LOOP
    SELECT
        (1 - ST_LineLocatePoint(path.geom, ST_Intersection(r.geom, path.geom))) * ST_Length(path.geom)
    INTO
        station
    (continues ...)
When it errors, it just returns that line number as the failing location, with no clue where it was in the loop through hundreds of features. (And any features it has processed are not stored to the output tables when it fails.) I don't know nearly enough about the plugin or about SQL to hack the external query, and I suspect that if it were a reasonable task, the plugin author would have included more revealing debug messages.
So is there some way I could use pgAdmin4 (or anything) from the server side to watch the query process? Even being able to see if it fails the first time through the loop or later would help immensely. Knowing the loop count at failure would point me to the exact problem feature. Being able to see "station" or "r.geom" would make it even easier.
Perfectly fine if the process is miserably slow or interferes with other queries, I'm the only user on this server.
This is not actually a way to watch the RiverGIS query in action, but it is the best I have found. It extracts the failing ST_Intersects() call from the RiverGIS code and runs it under your control, where you can display any clues you want.
When you're totally mystified where the RiverGIS problem might be, run this SQL query:
SELECT
    xs."XsecID" AS "XsecID",
    xs."ReachID" AS "ReachID",
    xs."Station" AS "Station",
    xs."RiverCode" AS "RiverCode",
    xs."ReachCode" AS "ReachCode",
    ST_Intersection(xs.geom, riv.geom) AS "Fraction"
FROM
    "<your project name>"."StreamCenterlines" AS riv,
    "<your project name>"."XSCutLines" AS xs
WHERE
    ST_Intersects(xs.geom, riv.geom)
ORDER BY xs."ReachID" ASC, xs."Station" DESC
Obviously replace <your project name> with the QGIS project name.
Also works for the BankLines step if you replace "StreamCenterlines" with "BankLines". Probably could be adapted to other situations where ST_Intersects() fails without a clue.
You'll get a listing with shorter geometry strings for good cross sections and double-length strings for bad ones. You'll probably need to widen your display column a lot to see this.
Works for me in pgAdmin4, or in QGIS3 -> Database -> DB Manager -> (click the wrench icon). You could select only the bad lines (a sketch of that follows), but I find the background info helpful.
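If you do want only the suspect rows, here is a hedged variant of the query above, assuming a "bad" cross section is one whose intersection is anything other than a single point:

-- Keep only rows whose intersection is not a single point
-- (e.g. a multipoint from a line that crosses the centerline twice).
SELECT
    xs."XsecID" AS "XsecID",
    xs."ReachID" AS "ReachID",
    ST_GeometryType(ST_Intersection(xs.geom, riv.geom)) AS "IntersectionType"
FROM
    "<your project name>"."StreamCenterlines" AS riv,
    "<your project name>"."XSCutLines" AS xs
WHERE
    ST_Intersects(xs.geom, riv.geom)
    AND ST_GeometryType(ST_Intersection(xs.geom, riv.geom)) <> 'ST_Point'
ORDER BY xs."ReachID" ASC;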

Most Performant way to implement time-dependent status

Central to a project I'm working on is a highlighting mechanic that can be applied to certain items on the website. The idea is that this highlighted status is only active for a certain amount of time.
I'm trying to find the most performant way to achieve this (in querying, setting the status, checking the status, and revoking it).
A first approach would be to simply set a value 'highlighted:true' on the item. This seems to be the most performant way to query for highlighted items. The drawback I see here is that a date for the highlighting action also needs to be stored, and furthermore an interval job needs to run to check on the highlighted items and potentially revoke their highlighted status. Also, the exact moment when an item stops being highlighted can't be determined exactly, since it depends on the interval of the check function.
A second approach would be to mainly store the date of the highlighting action and run the query against it. The query for highlighted objects seems way less performant, since every item ever created gets checked, and on top of that it's not just a boolean but a proper function throwing those different date values around to check whether the highlight is still valid. On the upside, no external cleanup function is necessary, and every highlighting period ends perfectly on time.
Would love to have your input on this. Is there maybe a clever pattern for this?
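For concreteness, a minimal sketch of the second approach in PostgreSQL-flavored SQL (the items table and column names are hypothetical). Storing the expiry moment instead of a boolean means no cleanup job is needed, and an index keeps the query from scanning every item ever created:

-- Store when the highlight expires rather than a true/false flag.
ALTER TABLE items ADD COLUMN highlighted_until timestamptz;

-- A B-tree index lets the database find still-active rows directly.
CREATE INDEX idx_items_highlighted_until ON items (highlighted_until);

-- Highlight an item for 24 hours.
UPDATE items SET highlighted_until = now() + interval '24 hours' WHERE id = 42;

-- Fetch currently highlighted items; the highlight ends exactly on time.
SELECT * FROM items WHERE highlighted_until > now();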

Data for some bound fields not appearing on report

I'm creating a 'SectionReport' (ActiveReports v9), and am dragging bound fields from the 'Report Explorer' directly onto the report. I've written the SQL query, which executes and returns the correct results as expected.
However, only a small portion of the bound fields are displaying data from the table.
Again, when I execute this in the 'Query Designer', I see all the data. When I save that exact query, only some of the data is populated on the report.
This process seems so straight forward and yet I seem to be missing something.
Please help and thanks in advance!
So you're saying the data is returned but not showing up in some of the fields? Have you checked the binding for each of those fields?
Can you step through with the debugger and verify that in the Detail_Format event there is data in report.Fields for those records that have missing values?
Can you add debug statements that print each field value in the Detail_Format event?
Based on what you described, all I can offer is some diagnostic techniques. If you'd like, please send your report to our support team and we can properly inspect it and help you out: http://arhelp.grapecity.com/
If you are using the end-user designer sample or the pre-compiled exe that comes with the installer, can you please make sure that PreviewPages is set to a larger number instead of 10?
You can change the value by following these steps: in the Report Explorer, double-click the Settings node, go to the Global Settings tab, and change the PreviewPages value.
If this is not the case, please attach the report. There are no other settings that will cause this.

Preserve the "everything" count and get filtered results in T-SQL?

I have created a complex SQL Server 2008/ColdFusion search page that searches through a variety of tables.
On the left is a list of the categories, plus an "everything" category; next to each category or type of result is the total number of results of that type found in the current search result.
I have everything working fine, but I am hoping there is a more optimal approach.
Every time I filter the search to a specific category, I still have to get all the results, so as to make sure the "everything" category has the correct totals.
And because of this, I have realized this is a problem I've had in lots of other ColdFusion/SQL programs:
you want to reduce the number of results by some field in the SELECT, but you need to keep the original record-count total,
and you really don't want to re-run the whole massive query every time when you just need the trimmed results.
This program is one CFC, one CFM, one stored procedure, and jQuery/AJAX inside the CFM to call the CFC.
The CFM calls the CFC when it originally gets a form-submitted search request, and then any filtering does the same thing.
However, if there are more than 20 results, it shows a button at the bottom that fetches 20 more records via AJAX.
My main goal is to improve performance and to keep an accurate record of what the record count was before any filtering, without having to rerun the unfiltered query every time.
This is kind of a complex problem, so there might not be any answers...
Thank you all for trying.
I would run the "big" query once, then pop it into a SESSION variable. Then I'd use Query-of-Query to return subsets based on filters.
The main query always exists, so you can query against that or use metadata like bigQuery.recordCount. Your QofQ is a smaller set of data you can use for display. And you can re-apply filters without having to return to the database.
Well, you need to run the query (or a COUNT(*)) at least once to get the total number. You could:
Cache this query and refer to the cached query's recordCount again and again
Store the record count in the session scope until the next time the query is run for this user
A sketch of the one-pass count query follows.
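For the count itself, a hedged T-SQL sketch (hypothetical table and column names): one pass gives a count per category plus the "everything" grand total via the ROLLUP row, so you only need to cache a handful of numbers rather than the full result set.

-- Per-category counts plus a grand total; the row where Category
-- IS NULL is the ROLLUP grand total (the "everything" count).
SELECT Category, COUNT(*) AS Hits
FROM dbo.SearchResults
GROUP BY Category WITH ROLLUP;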