MS Access, quoting/escaping strings for use in SQL - PostgreSQL

I am using MS Access as a GUI and I am connecting to PostgreSQL over ADO. I would like to prevent SQL injection via user input.
I know there are parameterized queries, but I haven't gotten them to work so far. Anyway, my question is:
Is there a built-in function to quote/escape user input, or do I need to write my own?

There is no built-in function for this, so you would have to roll your own.
That said, save yourself that time and read up on queries and parameters in ADO. It is not that difficult - no magic - and there are many good tutorials to be found.
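To make the parameter idea concrete, here is a minimal sketch of a parameterized query against PostgreSQL using Python's psycopg2 (the table, column names and connection details are made up; from Access/VBA the equivalent is an ADODB.Command with Parameters appended, rather than building the SQL string by hand):

```python
# Minimal sketch of a parameterized query against PostgreSQL via psycopg2.
# Table/column names and connection details are hypothetical.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="mydb", user="app", password="secret")
cur = conn.cursor()

user_input = "O'Brien'; DROP TABLE customers; --"  # hostile input stays harmless

# The %s placeholder is filled by the driver; the input is never spliced into the SQL text.
cur.execute("SELECT id, name FROM customers WHERE name = %s", (user_input,))
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```

The key point is that the user's input travels to the server separately from the SQL text, so it can never be parsed as SQL, and no quoting function is needed.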

Related

Is it possible to query the CKAN datastore with comparisons other than exact matching?

Is it possible to pass any of the comparison operators listed at https://www.postgresql.org/docs/9.1/static/functions-comparison.html to the datastore_search API?
I'm aware of the datastore_search_sql function, but it seems like pretty bad practice to be passing SQL queries directly from the frontend.
I'm afraid datastore_search only does =. See https://github.com/ckan/ckan/blob/master/ckanext/datastore/backend/postgres.py#L341 This API call is designed for simple filtering and sorting, mirroring the controls in the resource preview widget.
I'm not clear about your situation - the frontend sending SQL queries - but I don't see much difference between using datastore_search_sql and datastore_search. They are both relatively simple wrappers around Postgres SQL.
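For reference, an exact-match datastore_search call from Python with the requests library looks roughly like this (the portal URL and resource_id are placeholders):

```python
# Sketch of a datastore_search call; only equality filters are supported.
# The CKAN instance URL and resource_id below are placeholders.
import requests

response = requests.post(
    "https://demo.ckan.org/api/3/action/datastore_search",
    json={
        "resource_id": "00000000-0000-0000-0000-000000000000",
        "filters": {"country": "DE"},  # exact match only
        "limit": 10,
    },
)
response.raise_for_status()
for record in response.json()["result"]["records"]:
    print(record)
```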

CTE vs TVF Performance

Which performs better: common table expressions or table-valued functions? I'm designing a process in which I could use either, and I am unable to find any real data either way. Whichever route I choose would be executed via a stored procedure, and the data would ultimately update a table connected through a linked server (unfortunately there is no way around this). Insights appreciated.
This isn't really a performance question. You are comparing tuna fish and watermelons. A CTE is an inline view that can be used only by the next query. A TVF is a complete unit of work that can function on its own, unlike a CTE. They both have their place, and when used correctly they are incredibly powerful tools.
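To make the distinction concrete, here is a rough sketch in Python with pyodbc against a hypothetical dbo.Orders table (the connection string, table and function names are invented; CREATE OR ALTER needs SQL Server 2016 SP1 or later). The CTE lives and dies with the single statement that follows it, while the inline TVF is a standalone object that any later query can reuse:

```python
# Sketch contrasting a CTE with an inline TVF. Connection string, dbo.Orders
# and dbo.RecentOrders are hypothetical.
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
                      "DATABASE=Sales;Trusted_Connection=yes")
cur = conn.cursor()

# A CTE is an inline view: it exists only for the single statement that follows it.
cur.execute("""
    WITH recent_orders AS (
        SELECT CustomerID, OrderDate
        FROM dbo.Orders
        WHERE OrderDate >= DATEADD(day, -30, GETDATE())
    )
    SELECT CustomerID, COUNT(*) AS order_count
    FROM recent_orders
    GROUP BY CustomerID;
""")
print(cur.fetchall())

# An inline TVF is a standalone, reusable object that any later query can SELECT from.
cur.execute("""
    CREATE OR ALTER FUNCTION dbo.RecentOrders (@days int)
    RETURNS TABLE
    AS RETURN (
        SELECT CustomerID, OrderDate
        FROM dbo.Orders
        WHERE OrderDate >= DATEADD(day, -@days, GETDATE())
    );
""")
conn.commit()

cur.execute("SELECT CustomerID, COUNT(*) AS order_count "
            "FROM dbo.RecentOrders(30) GROUP BY CustomerID;")
print(cur.fetchall())
```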

DATASTAGE capabilities

I'm a Linux programmer.
I used to write code to get things done: Java, Perl, PHP, C.
I need to start working with DataStage.
All I see is that DataStage works on table/CSV-style data and does so line by line.
I want to know whether DataStage can work on files that are not table/CSV-like. Can it load data into data structures and run functions on them, or is it limited to working on one line at a time?
Thank you for any information you can give on the capabilities of DataStage.
IBM (formerly Ascential) DataStage is an ETL platform that, indeed, works on data sets by applying various transformations.
This does not necessarily mean that you are constrained to applying only single-line transformations (you can also aggregate, join, split, etc.). Also, DataStage has its own programming language - BASIC - that allows you to modify the design of your jobs as needed.
Lastly, you are still free to call external scripts from within DataStage (using the DSExecute function, the Before Job property, the After Job property, or the Command stage).
Please check the IBM Information Center for comprehensive documentation on BASIC programming.
You could also check the DSXchange forums for DataStage specific topics.
Yes it can. As Razvan said, you can join, aggregate, and split. It can use loops and external scripts, and it can also handle XML.
My advice is this: if you have large quantities of data to work on, then DataStage is your friend; if the data you have to load is not very big, it will be easier to use Java, C, or any programming language you already know.
You can apply all kinds of functions and conversions and manipulate the data. DataStage is mainly used for its ease of use when handling huge volumes of data from a data mart/data warehouse.
The main process in DataStage is ETL - Extraction, Transformation, Loading.
Where a programmer might use 100 lines of code to connect to some database, here we can do it with one click.
Almost anything can be done here, even C or C++ code in a routine activity.
If you are talking about hierarchical files, like XML or JSON, the answer is yes.
If you are talking about complex files, such as are produced by COBOL, the answer is yes.
All of this uses built-in functionality (e.g. the Hierarchical Data stage, the Complex Flat File stage). Review the DataStage palette to find other examples.

Query log equivalent for Progress/OpenEdge

Short story: a report running against a Progress database (OpenEdge Release 10.1C03) takes hours to complete. I suspect that it does not take advantage of existing indexes. I would like to understand how it scans the data so that I can try to add an index that will make it run faster.
Source code of the report is not available. The code is native Progress 4GL, not SQL.
If it were an SQL database, I would try to dump the SQL queries and go from there. With 4GL I did not find any such functionality. Is it possible to somehow peek at what gets executed at the low level?
What else can be done if there is no source code?
Thanks!
There are several things you can do:
If I recall correctly 10.1C should have the _usertablestat and _userindexstat virtual system tables available. These allow you to observe, at runtime, what tables and indexes are being accessed by a particular session. You can either write your own 4GL program to query them or you can use the screens in PROMON, R&D, 3 "Other Displays", 5 "I/O Operations by User by Table" and 6 "I/O Operations by User by Index". That will show you what tables and indexes are actually in use and how much use they are getting. If the observed data seems wrong it will probably give you a clue. (If the VSTs are missing it might be because the db was upgraded from an older version -- add them with proutil dbname -C updatevsts.)
You could also use the session startup parameters -clientlog "filename" and -logentrytypes QryInfo to obtain more detailed information about the queries being executed.
Keep in mind that Progress is not SQL. Unlike most SQL databases, the 4GL uses a static, compile-time optimizer: index selection happens when the code is compiled. So unless you can recompile (and since you don't have the source, that seems unlikely) you won't be able to improve things by adding missing indexes. You might, however, at least be able to show the person who does have the source where the problem is.
Another tool that can help is the profiler. This will identify where in the code the time is being spent. That can also be good information to provide to the original vendor if they need help finding the problem. For more information on the profiler: http://dbappraise.com/ppt/profiler.pptx

How to SELECT data from SQL Server using .NET 2.0 -- as simple as possible

Easy question:
I have an app that needs to make a half dozen SELECT requests to SQL Server 2005 and write the results to a flat file. That's it.
If I could use .NET 3.5, I'd create a LINQ-To-SQL model, write the LINQ expressions and be done in an hour. What is the next best approach given that I can't use .NET 3.0 or 3.5? Are ADO.NET DataReaders/DataSets the best option, or am I forgetting something else available?
Using the SqlCommand and SqlDataReader classes is your best bet. If you need to write the results to a flat file, you should use the reader directly instead of going through a DataSet, since the latter will load the entire result into memory before you're able to write it out to the flat file.
The SqlDataReader lets you read the data in a streaming fashion, making your app a lot more scalable in this situation.
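The question is specifically about .NET 2.0, where SqlDataReader is the right tool; purely to illustrate the streaming pattern described above, here is the same shape in Python with pyodbc (connection string, query and file name are made up):

```python
# Illustration of the streaming idea only: iterate rows from the cursor and write
# each one out immediately, so the full result set is never held in memory.
# In .NET 2.0 the SqlDataReader plays the role of the cursor here.
import csv
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
                      "DATABASE=Sales;Trusted_Connection=yes")
cur = conn.cursor()
cur.execute("SELECT CustomerID, OrderDate, Total FROM dbo.Orders")

with open("orders.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([column[0] for column in cur.description])  # header row
    for row in cur:  # rows are fetched as you iterate
        writer.writerow(row)
```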
As Nick K so helpfully answered on my SQL Server 2000 question on Server Fault, the bcp utility is really handy for this.
You can write a batch file or a quick script that calls bcp with your queries and have it dump CSV or SQL output directly to a text file!
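Such a quick script might look like the following sketch, here driving bcp from Python (the server name, query, output path and switches are illustrative; check bcp's help for the exact options on your version, and a plain .bat file with the same bcp line works just as well):

```python
# Sketch of driving bcp from a quick script. Server, query and output path are
# placeholders; -T is a trusted connection, -c character mode, -t, a comma field terminator.
import subprocess

subprocess.run(
    [
        "bcp",
        "SELECT CustomerID, OrderDate, Total FROM Sales.dbo.Orders",
        "queryout", r"C:\exports\orders.csv",
        "-S", "myserver",
        "-T",
        "-c",
        "-t,",
    ],
    check=True,
)
```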
Agree with Dave Van den Eynde's answer above, but I would say that if you're pushing a large amount of data into these files, and if your app is something that can support it, then it's worth taking a look at making an SSIS package.
Could be complete overkill for this, but it's something that is often overlooked for bulk import/export.
Alternatively, you could avoid writing code and use BCP.exe:
http://msdn.microsoft.com/en-us/library/ms162802(SQL.90).aspx