How to limit the -findall result to a certain number in a curl call to FileMaker Server

I know the -findall query parameter for FileMaker Server will return all the rows in that specific database, but for testing purposes I just want it to show, say, three.
I am aware that SQL has a LIMIT command, but how does a curl call against FileMaker handle this same scenario?
In this case, -find is not an option for me.

I am assuming you are using curl to query a FileMaker Server. If so, you are looking for the -max URL parameter.
From the Help PDF for FileMaker Server 13:
-max (Maximum records) query parameter
Specifies the maximum number of records you want returned.
Value is: a number, or use the value "all" to return all records. If -max is not specified, all records are returned.
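
For example, here is a minimal sketch in Python, assuming XML Custom Web Publishing is enabled on the server; the host, database, layout, and credentials are placeholders, and any HTTP client, including curl, takes the same URL:

import requests

# -findall combined with -max=3 returns at most three records
# from the layout's found set.
url = ("http://fms.example.com/fmi/xml/fmresultset.xml"
       "?-db=Sales&-lay=Orders&-findall&-max=3")
resp = requests.get(url, auth=("user", "password"))
print(resp.text)  # fmresultset XML containing up to 3 records

The equivalent curl call simply passes the same URL along with -u user:password.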

Related

Increase Filter Limit in Apache Superset

I am trying to create a filter for a field that contains over 5000 unique values. However, the filter's query is automatically setting a limit of 1000 rows, meaning that the majority of the values do not get displayed in the filter dropdown.
I updated the config.py file inside the 'anaconda3/lib/python3.7/site-packages' directory by increasing the DEFAULT_SQLLAB_LIMIT and QUERY_SEARCH_LIMIT to 6000; however, this did not work.
Is there any other config that I need to update?
P.S. The code snippet below shows the JSON representation of the filter where the issue seems to be coming from.
"query": "SELECT casenumber AS casenumber\nFROM pa_permits_2019\nGROUP BY casenumber\nORDER BY COUNT(*) DESC\nLIMIT 1000\nOFFSET 0"
After using the grep command to find all files containing the text '1000', I found out that the filter limit can be configured through the filter_row_limit in viz.py.
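
For reference, the change is just raising that attribute; a sketch, since the file lives under your Superset install (e.g. site-packages/superset/viz.py) and the exact class and line vary by version:

# superset/viz.py (location varies by Superset version)
# This attribute caps the query that populates filter dropdowns and is
# the source of the LIMIT 1000 in the generated SQL shown above.
filter_row_limit = 6000  # default was 1000

Note that edits inside site-packages are lost on upgrade, so treat this as a stopgap.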

Informatica SQ returns different result

I am trying to pull data from DB2 via Informatica. I have a Source Qualifier (SQ) query that pulls a few fields based on joins across 4 different tables.
When I run the query directly in the database, it returns the expected result; however, when I run it in Informatica with the debugger, I see something else.
Please note that all the columns' data match perfectly, except one single column.
Weird thing is, this is a calculated field from the table based on a case statement:
CASE WHEN Column1='3' THEN 'N' ELSE 'Y' END.
Since this is a calculated field one character long, I have connected it from the source to the SQ using a port of length 1.
This returns 'Y' when executed in the database, but when I copy-paste the same query into the SQ in Informatica and run it, I get 'E', a value that should never be possible, as I expect only an N or a Y. I have verified the column order and that the column is in the right place. This is very strange; is something going wrong because of the CASE statement?
Save yourself the hassle: put an Expression transformation after the Source Qualifier, calculate the port value there, and forget about it.
I think I got the issue. We use Informatica PowerExchange to connect to an AS400 system (DB2), and it seems that when we set a flag in AS400 and pass it to Informatica via PowerExchange, it is converted to binary; to solve this, there needs to be an entry in the PowerExchange configuration file.
Unfortunately, I was not aware that it could be related to PowerExchange rather than PowerCenter itself.
Thanks for your assistance! Below is the KB article about it.
https://kb.informatica.com/solution/4/Pages/17498.aspx

Executing a query using the bq command line in Google BigQuery

I execute a query using the Python script below, and the table gets populated with 2,564,691 rows. When I run the same query in the Google BigQuery console, it returns 17,379,353 rows (the query is identical). I was wondering whether there is some issue with the script below; I am not sure whether --replace in bq query replaces the past result set instead of appending to it.
Any help would be appreciated.
import time  # needed for strftime

dateToday = time.strftime("%Y/%m/%d")
dateToday1 = dateToday.replace('/', '')
# Raw string: \U in "C:\Users" is an invalid escape sequence in Python 3.
commandStr = (r"type C:\Users\query.txt | bq query --allow_large_results "
              r"--replace --destination_table table:dataset1_%s -n 1" % dateToday1)
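
For what it is worth, --replace tells bq to truncate the destination table before writing, so each run overwrites the previous result set rather than appending to it; bq's --append_table flag does the opposite. Below is a sketch of the same call that appends instead (the destination string is copied verbatim from the question; note that -n only limits how many rows are printed, not how many are written):

import subprocess
import time

dateToday1 = time.strftime("%Y%m%d")
with open(r"C:\Users\query.txt") as f:
    sql = f.read()
# --append_table appends to the destination instead of truncating it;
# bq reads the query from stdin when no query argument is given.
subprocess.run(
    ["bq", "query", "--allow_large_results", "--append_table",
     "--destination_table", "table:dataset1_%s" % dateToday1],
    input=sql, text=True, check=True,
)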
In the Web UI you can use the Query History option to navigate to the respective queries.
After you locate them, you can expand the respective entries and see exactly what query was executed.
I am more than sure that just by comparing the query texts you will see the source of the "discrepancy" right away!
Added:
In Query History you can see not only the query text, but also all the configuration properties that were used for the respective query, such as Write Preference, among others. So even if the query text is the same, you can see a potential difference in configuration that will give you a clue.

PostgREST filter returns incorrect results when value contains a space

Using the current version of PostgREST (v0.3.2.0), I am attempting a very simple GET call:
The DB contains two records with the following accountNames: "Account 1" and "Account #2".
This works:
GET localhost:3000/accounts?accountName=eq.Account 1
==> proper data is retrieved.
This does NOT work:
GET localhost:3000/accounts?accountName=eq.Account #2
==> NO data is retrieved. Obviously the # character prevents the filter from working properly.
Is there a way around this problem?
Use URL encoding: /accounts?accountName=eq.Account%20%232. A literal # begins the URI fragment, so everything after it is never sent to the server.
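
For example, a small Python sketch of the encoding, using the host and value from the question (urllib.parse.quote does the percent-encoding):

from urllib.parse import quote

base = "http://localhost:3000/accounts"
name = "Account #2"
# quote() turns the space into %20 and '#' into %23, so the whole
# value reaches the server instead of being cut off at '#'.
url = "%s?accountName=eq.%s" % (base, quote(name, safe=""))
print(url)  # http://localhost:3000/accounts?accountName=eq.Account%20%232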

When an SQL query returns more than one row, what value will be stored in the host variable

In a COBOL-DB2 program, what value will be stored in the host variable after getting SQLCODE -811 (i.e., multiple rows returned by the query)?
No data will be fetched into your host variable, because SQLCODE < 0 means there is an error. Please refer to this link: IBM SQL Tutorial
You can use statements of this kind to retrieve single rows of data into host variables. The single row can have as many columns as desired. If a query produces more than one row of data, the database server cannot return any data. It returns an error code instead.
Indeed, no data can be fetched if you just use a query like that. In this case, you can use a CURSOR and the FETCH statement. This way, you can read the returned rows into the host variable one by one.
In short, this goes like this:
EXEC SQL DECLARE curs_name CURSOR FOR
    SELECT ... FROM ... WHERE ...
END-EXEC.
EXEC SQL OPEN curs_name END-EXEC.
EXEC SQL FETCH curs_name INTO :host-var END-EXEC.
EXEC SQL CLOSE curs_name END-EXEC.
As shown above, all these statements are enclosed between EXEC SQL and END-EXEC. Of course, you have to FETCH once for each row; check the SQLCODE after each FETCH to see whether you have reached the end of the cursor: you are looking for SQLCODE 100.