Can I use multiple NLS_LANG values in the same SQL*Loader session?

I have a situation where my Oracle SQL*Loader job has to support 4 character sets. The loader script is shared and loads files from multiple country sources, hence the requirement.
At the moment, the value of a field coming from the file is 1àáâäåèéêëîïñóòôöùûüÿßçtecseven, and after the loader loads this file into the table, the value appears as �������������tecseven.
Now, I checked my env file, and it currently has the following value set for this parameter:
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
My question: how can I additionally set other NLS_LANG parameter values, such as
NLS_LANG=FRENCH_FRANCE.AL32UTF8, NLS_LANG=GERMAN_GERMANY.AL32UTF8, etc.?
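For what it's worth, I understand SQL*Loader can also take the character set of the input file from the control file itself, independently of NLS_LANG, and since AL32UTF8 covers the characters of all these languages, the � characters usually indicate that the data file's actual encoding does not match the declared character set. A minimal control-file sketch, with hypothetical file and table names, assuming the input file really is UTF-8 encoded:
-- hypothetical control file; CHARACTERSET declares the encoding of the data file
LOAD DATA
CHARACTERSET AL32UTF8
INFILE 'country_feed.dat'
APPEND
INTO TABLE staging_table
FIELDS TERMINATED BY ','
(customer_name CHAR(255))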
TIA.

Related

External files in SAS DI

I am facing an issue: when I execute a job in SAS DI to populate records into a .dsv file, I get the column headers on multiple lines instead of a single line; after column 257, the external file breaks the column values onto the next line. Please help me find a resolution.
I have deleted the external file and re-created it with a new template, and I even tried to increase the logical length in the File Parameters of the external file, but I am still stuck on the same page. As I am new to SAS DI, I am not able to find the solution.

COBOL DB2 Bind Process

What is the difference between pre-compile and bind for a COBOL DB2 program?
How does the syntax check differ between the two processes?
If we give a wrong column name in our code, in which process will it fail?
It seems you need to do some study in the Db2 Knowledge Centre.
A pre-compile action creates a bindfile containing the static SQL present in the source code (i.e. the sections of code with EXEC SQL statements in your COBOL), in addition to a compilable form of the source code that contains the non-SQL logic and data (your PROCEDURE DIVISION and DATA DIVISION, etc.).
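For illustration, a typical static SQL section of the kind the precompiler extracts might look like this (the table, column, and host-variable names are hypothetical):
EXEC SQL
    SELECT LASTNAME
    INTO :WS-LASTNAME
    FROM EMPLOYEE
    WHERE EMPNO = :WS-EMPNO
END-EXEC.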
A bind action uses both the bindfile and the database to create a package inside the database, which is the executable form of the bindfile's contents. The package contains sections corresponding to your EXEC SQL blocks of static SQL.
Later, when the built (i.e. compiled and linked) application executes and wants to use the database, sections of the package are loaded from the database catalog (or read from cache) and executed by the database manager to deliver the required actions.
As each command (precompile vs. bind) serves a different purpose, the syntax varies, and it can also vary with the Db2-server platform (z/OS, i-series, Linux/Unix/Windows) and version. Because the precompiler primarily checks the SQL syntax, a wrong column name will typically fail at bind time, when the statements are validated against the database catalog.
Refer to the free Db2 Knowledge Center for your version of Db2 and your Db2-server platform (separate Knowledge Center documentation sites exist for Db2 for z/OS, Db2 for i-series, and Db2 for Linux/Unix/Windows).
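As a sketch only: on Db2 for Linux/Unix/Windows, the two steps could be driven from a CLP script along these lines (the program and database names are hypothetical; z/OS uses JCL-based procedures instead):
-- hypothetical CLP script, run with: db2 -tvf build_payroll.clp
CONNECT TO sampledb;
-- precompile: produces payroll.bnd plus a compilable payroll.cbl
PREP payroll.sqb BINDFILE;
-- bind: creates the package in the database from the bindfile
BIND payroll.bnd;
CONNECT RESET;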

Is it possible to prevent the SQL Producer from overwriting just one of the table's columns?

Scenario: A computed property needs to be available for RAW methods. The IsComputed property set in the model will not work, as its value will not be available to RAW methods.
Attempted Solution: Create a computed column directly on the SQL table, as opposed to setting the IsComputed property in the model, and specify that CodeFluent Entities must not overwrite the computed column. I would then expect the BOM to read the computed SQL field no differently than if it were a normal database field.
Problem: I can't figure out how to prevent CodeFluent Entities from overwriting the computed column. I attempted to use the production flags as well as setting produce="false" for the property in the .cfp. Neither worked.
Question: Is it possible to prevent CodeFluent Entities from overwriting my computed column, and if so, how?
The solution you're looking for is here.
You can execute whatever custom T-SQL scripts you like; the only requirement is to give the script a specific name so the Producer knows when to execute it.
For example, if you want your custom script to execute after the tables are generated, name your script
after_[ProjectName]_tables.
Save your custom T-SQL file alongside the CodeFluent-generated files and build the project.
In my specific case, I had to enable a full-text index on one of my table columns. I wrote the SQL script for that functionality and saved it as
`after_[ProjectName]_relations_add`
Here's how they look in my file directory:
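For illustration, such a custom full-text script might look like this (the catalog, table, column, and key-index names are hypothetical; only the file-name convention above matters to the Producer):
-- hypothetical contents of after_[ProjectName]_relations_add.sql
CREATE FULLTEXT CATALOG MyFullTextCatalog AS DEFAULT
GO
CREATE FULLTEXT INDEX ON dbo.Document (Content)
    KEY INDEX PK_Document
    WITH CHANGE_TRACKING AUTO
GO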
Alternate Solution: An alternate solution is to execute the following T-SQL script after the SQL Producer finishes generating.
ALTER TABLE PunchCard DROP COLUMN PunchCard_CompanyCodeCalculated
GO
ALTER TABLE PunchCard
ADD PunchCard_CompanyCodeCalculated AS CASE
WHEN PunchCard_CompanyCodeAdjusted IS NOT NULL THEN PunchCard_CompanyCodeAdjusted
ELSE PunchCard_CompanyCode
END
GO
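(The CASE expression above is just a null check; it is equivalent to COALESCE(PunchCard_CompanyCodeAdjusted, PunchCard_CompanyCode), if you prefer the shorter form.)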
Additional Configuration Needed to Make the Solution Work: For this solution to work, one must also configure the BOM so that it does not attempt to save the data associated with the computed column. This can be done in the Model using the advanced properties: in my case, I selected the CompanyCodeCalculated property, went to the advanced settings, and set the Save setting to False.
Question: Somewhere in the Knowledge Center there is a passing reference on how to automate the execution of SQL scripts after the SQL Producer finishes, but I cannot find it. Does anybody know how this is done?
Post Usage Comments: Just wanted to let people know I implemented this approach and am so far happy with the results.

SSIS: SQL statement as parameter

I have a button on my site where the user clicks to export the data to an Excel file. The problem I am facing now is that the data has grown too large (40+ MB) and the web page throws a timeout error.
The web page takes a parameter from a dropdown box and then passes it to a stored procedure.
My solution to this is to dump the data into an Excel file on a network drive instead of returning it directly to the user. The user will be notified, via msdb.dbo.sp_send_dbmail, once the file is ready.
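For reference, the notification step might look something like this (the profile name, recipient, and wording are hypothetical):
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'ExportMailProfile',  -- hypothetical Database Mail profile
    @recipients = N'user@example.com',     -- hypothetical recipient
    @subject = N'Excel export ready',
    @body = N'Your export has been written to the network drive.';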
I found articles on the internet showing how to pass a parameter to the stored procedure within SSIS, but not how to pass the whole SQL statement to SSIS.
I'm new to SSIS and would really appreciate a detailed example.
Thank you!
I am not that familiar with the web side, so I'm not interested in making any changes to the web application at this moment.
With the Execute SQL Task, you can put the SQL statement in a variable or in a separate file.
In your Execute SQL Task, set the SQLSourceType to either Variable or File connection.
Then point the SQLStatement property at the appropriate variable or file connection.
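For illustration, the statement stored in such a variable might look like this (the procedure name and variable are hypothetical; with an OLE DB connection, the Execute SQL Task uses ? as the parameter marker, which you map to an SSIS variable on the Parameter Mapping page):
-- hypothetical contents of the SSIS variable User::ExportQuery
EXEC dbo.usp_ExportSalesData @CountryCode = ?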

Where does the stored procedure sys.xp_readerrorlog read its contents from, specifically?

I have been working with the stored procedure sys.xp_readerrorlog for around a week now, and what I have learned is that it accepts 7 parameters to fully refine how it should display its data. Easy enough to understand.
My question now is: where exactly does this stored procedure get its data from? I know you can also preview the data in the SSMS Object Explorer, under Management, in the SQL Server Logs folder, and I have come to the theory that the dialog that opens when you read the logs also uses this procedure to display the data to the user in a grid.
I am baffled. I scouted through the system databases and found nothing (no table) that looks remotely like the output you get from this procedure:
exec sys.xp_readerrorlog 1,0,'','',null,null,N'Desc';
Can any expert tell me where the actual log data is stored, and whether it is queryable through a SELECT statement if you have admin rights?
It reads from the SQL Server error log file, which is a plain text file. There is no built-in interface to the file from T-SQL; xp_readerrorlog is widely known, but it is also undocumented, so relying on it is risky, although of course you can use it if you don't mind that risk.
Using SMO you can find the file location, but there is no special API for reading it, because it is just a text file.
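For what it's worth, the commonly reported (unofficial, since the procedure is undocumented) meanings of the seven parameters are along these lines:
-- unofficial parameter meanings for the undocumented xp_readerrorlog
EXEC sys.xp_readerrorlog
    0,          -- log number: 0 = current log, 1 = Archive #1, and so on
    1,          -- log type: 1 = SQL Server error log, 2 = SQL Agent log
    N'Login',   -- first search string (optional filter)
    N'failed',  -- second search string (rows must match both)
    NULL,       -- start time
    NULL,       -- end time
    N'desc';    -- sort order: N'asc' or N'desc'
-- on recent versions, the log file path itself can also be read in T-SQL:
SELECT SERVERPROPERTY('ErrorLogFileName');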