500 Internal Server Error - Azure Function through Data Factory using Postgres DB

I am trying to edit someone else's Azure Function app. When I run theirs, it works fine and connects to their DB successfully. When I try to change the connection string to point at my DB, it gives me the error
HTTP response code: 500 Internal Server Error
without any other information.
Even if I change only the one line of code that defines the DB connection, it doesn't work. It works on my local machine; it just doesn't work in Azure Functions.
Their original code (which works):
postgresql = os.environ.get('POSTGRES_SQL')
cnxn = psycopg2.connect(postgresql)
vs. mine (which doesn't work):
postgresql = 'postgresql://sqladmin:{my-password}@{db-connection-string}?sslmode=require'
cnxn = psycopg2.connect(postgresql)
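As an aside, psycopg2 also accepts keyword arguments instead of a URL, which sidesteps escaping problems when the password contains special characters. A minimal sketch (host, database name, and credentials below are placeholders, not real values):
import psycopg2

# Keyword-argument form; every value here is a placeholder for illustration
cnxn = psycopg2.connect(
    host='my-server.postgres.database.azure.com',
    dbname='mydb',
    user='sqladmin',
    password='my-password',
    sslmode='require',
)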
I am also not sure where their DB connection comes from via os.environ.get('POSTGRES_SQL'), as they don't pass that value in anywhere. They call the function from Azure Data Factory with no parameters passed in (nor anywhere in the function).
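For context, os.environ.get('POSTGRES_SQL') reads an application setting of the Function App itself (Configuration > Application settings in the portal, or local.settings.json when running locally), so nothing has to be passed in from Data Factory. A minimal sketch, assuming a setting named POSTGRES_SQL has been created on the Function App:
import os
import psycopg2

# POSTGRES_SQL comes from the Function App's application settings,
# not from the HTTP request or the Data Factory activity
connection_string = os.environ.get('POSTGRES_SQL')
if connection_string is None:
    raise RuntimeError('POSTGRES_SQL app setting is not configured')
cnxn = psycopg2.connect(connection_string)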
Even when I try just a bare-bones block of code, as seen below, it gives me the same error.
import azure.functions as func
import psycopg2

def main(req: func.HttpRequest) -> func.HttpResponse:
    postgresql = 'postgresql://sqladmin:{my-password}@{db-connection-string}?sslmode=require'
    cnxn = psycopg2.connect(postgresql)
    cursor = cnxn.cursor()
    cursor.execute("CREATE TABLE staging.test (some_column varchar(100) null)")
    cnxn.commit()  # commit is a method; without the parentheses it never runs
    cursor.close()
    return func.HttpResponse("This HTTP triggered function executed successfully.")
Please let me know what I'm missing, or if you need any other info. I have already tried the answers from similar StackOverflow questions.

Related

Calling GCP Translate API within Dataproc pyspark map

I am trying to call the language detection method of the Translate client API from PySpark for each row in a file.
I created a map method as follows, but the job seems to just freeze with no error. If I remove the call to the Translate API, it executes fine. Is it possible to call Google client API methods within a PySpark map?
# Mapping method to do the translation
from google.cloud import translate

def doTranslate(data):
    translate_client = translate.Client()
    # Get the message information
    messageId = data[0]
    messageContent = data[6]
    detectedLang = translate_client.detect_language(messageContent)
    r = []
    r.append(detectedLang)
    return r
Figured it out! Your question led me in the right direction, thanks!
It turns out I was getting an exception from the call because I was going past the default quota for message sizes. I added a try/except block and determined this was the problem. Then cutting the message size down (I am just testing, so I don't want to mess with the quota) fixed the issue.
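A minimal sketch of that workaround, assuming the same client library and row layout as above (the exact size cap is an assumption; the real quota limit depends on the project):
from google.cloud import translate

MAX_CHARS = 1000  # assumed cap for testing; adjust to the actual quota

def doTranslate(data):
    translate_client = translate.Client()
    messageId = data[0]
    messageContent = data[6][:MAX_CHARS]  # cut the message down to size
    try:
        detectedLang = translate_client.detect_language(messageContent)
    except Exception as exc:
        # surfacing the exception shows the quota error instead of a silent freeze
        return [(messageId, None, str(exc))]
    return [(messageId, detectedLang)]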

How to rollback a postgresql connection when using cherrypy

I use cherrypy to run an API and use thread_data to attach a postgresql connection to each thread.
def connect_pg(thread_index):
    cherrypy.thread_data.pgdb = connect(**cherrypy.config['pgargs'])
    dict_cur = cherrypy.thread_data.pgdb.cursor(cursor_factory=psycopg2.extras.DictCursor)
    dict_cur.close()
I then create a cursor using
cherrypy.thread_data.pgdb.cursor(cursor_factory=psycopg2.extras.DictCursor)
Occasionally a bad request may be made, which results in the error message
InternalError: current transaction is aborted, commands ignored until end of transaction block
The standard solution for this is to do a rollback (http://initd.org/psycopg/docs/faq.html).
However, rollback is a connection method and not a cursor method.
What are some good ways to deal with this error when using cherrypy's thread_data?
I used autocommit within cherrypy and that has done the job nicely (see How do I properly use psycopg2 with cherrypy?). I now set up the connection as follows:
def connect_pg(thread_index):
    cherrypy.thread_data.pgdb = connect(**cherrypy.config['pgargs'])
    cherrypy.thread_data.pgdb.autocommit = True
    dict_cur = cherrypy.thread_data.pgdb.cursor(cursor_factory=psycopg2.extras.DictCursor)
    dict_cur.close()
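For completeness, since rollback() lives on the connection rather than the cursor, an alternative to autocommit is to keep the connection on thread_data (as above) and roll it back whenever a request fails. A sketch, assuming the connect_pg setup shown earlier (run_query is a hypothetical helper name):
import cherrypy
import psycopg2
import psycopg2.extras

def run_query(sql, params=None):
    conn = cherrypy.thread_data.pgdb
    cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
    try:
        cur.execute(sql, params)
        conn.commit()
        return cur.fetchall()
    except psycopg2.Error:
        conn.rollback()  # clears the aborted transaction for this thread
        raise
    finally:
        cur.close()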

Crystal reports - server and database

I'm using Visual Studio 2010 + SQL Server 2008.
I'm trying to show my reports using CR. When I use the system on my local machine, everything is OK.
I use stored procedures to create the reports.
The issue appears when I deploy the system on another PC. A dialog appears asking for:
Server: // retrieves the original (local) server // Not correct; I need the client's server
Database: // retrieves the original (local) DB // Not correct; I need the client's DB
Username: I don't use any user. What user?
Password: I don't use any password. What password?
I saw other solutions, but I can't find what data I should use for the username or password. I use Windows authentication to log in to SQL Server.
Thanks.
Regards.
Edit: here is my code. I can't use parameters; I don't receive any error, but the system doesn't recognize the parameter that I send:
Dim NuevoReporte As New CReportNotaPorUsuario
Dim contenido As String
Dim ReportPath As String = My.Application.Info.DirectoryPath & "\CReportNotaPorUsuario.rpt"
Dim ConexionCR As New CrystalDecisions.Shared.ConnectionInfo()
contenido = Servicios.Funciones_Auxiliares.LeerArchivo(My.Application.Info.DirectoryPath & "\configuracion.txt")
ConexionCR.ServerName = Servicios.Funciones_Auxiliares.TextoEntreMarcas(contenido, "<server>", "</server>")
ConexionCR.DatabaseName = Servicios.Funciones_Auxiliares.TextoEntreMarcas(contenido, "<catalog>", "</catalog>")
ConexionCR.IntegratedSecurity = True
CrystalReportViewer1.ReportSource = ReportPath
'NuevoReporte.SetParameterValue("@cod_usuario", cbousuario.SelectedValue)
Dim field1 As ParameterField = Me.CrystalReportViewer1.ParameterFieldInfo(0)
Dim val1 As New ParameterDiscreteValue()
val1.Value = cbousuario.SelectedValue
field1.CurrentValues.Add(val1)
SetDBLogonForReport(ConexionCR)
It appears that you have separate servers and databases between the development and production environments. You need to make sure that when you deploy your VS solution, the production server and database get referenced, not the development server and database.
There are some tutorials out there that can help you find a way to achieve this. Check out:
http://msdn.microsoft.com/en-us/library/dd193254(v=vs.100).aspx
Visual Studio 2010 Database Project Deploy to Different Environments
http://www.asp.net/web-forms/tutorials/deployment/advanced-enterprise-web-deployment/customizing-database-deployments-for-multiple-environments
EDIT: This seems to have evolved into a different issue than originally stated in the question. To dynamically get the connection string for CR from the text file, you will have to read the text file first and put the server name and database name into variables. To read a text file, you can use something like string text = File.ReadAllText(@"C:\Folder\File.txt"); but you will need to extract the server name and database name into variables. Then, to use the variables in your connection string, set ConnectionInfo.ServerName = variable1; and ConnectionInfo.DatabaseName = variable2;.

SSRS ReportingService2010 change embedded DataSource to shared DataSource

I have SQL Server 2008 with SSRS installed on one server and SQL Server 2008 R2 with SSRS installed on a new server. I want to migrate 200+ reports, as well as a few shared schedules and a couple of data sources, from the first server to the second one using the SSRS web service API. For simplicity's sake, since there are only a couple of shared data sources, I went ahead and created those using the Report Manager interface.
Unfortunately, those who came before me embedded the data source information in each report (connection string, login, password, etc). I thought this would be a great time to change these to point to the shared data source so this would not have to be done for each report, one by one. I can create the reports on the new server just fine using CreateCatalogItem but I can't seem to determine how to properly go about changing from an embedded data source to a shared data source.
So far I have tried both SetItemReferences:
ReportService2010.ItemReference itemRef = new ReportService2010.ItemReference();
itemRef.Name = "TMS";
itemRef.Reference = "/Data Sources/TMS";
rs2010.SetItemReferences(catItem.Path, new ReportService2010.ItemReference[] { itemRef });
and SetItemDataSources:
ReportService2010.DataSourceReference dataSourceRef = new ReportService2010.DataSourceReference();
dataSourceRef.Reference = "/Data Sources/TMS";
ReportService2010.DataSource dataSource = new ReportService2010.DataSource();
dataSource.Name = "TMS";
dataSource.Item = dataSourceRef;
rs2010.SetItemDataSources(catItem.Path, new ReportService2010.DataSource[] { dataSource });
Both methods result in a "NotFoundException" when attempted on a report with an embedded data source but they both work just fine on reports that are already pointing to a shared data source.
Furthermore, I have searched all over Google as well as StackOverflow for a solution but have found nothing. Can anyone point me in the right direction here?
So I kept working with the SetItemReferences method and had a brilliant idea that ended up working. The final code I used is below:
List<ReportService2010.ItemReference> itemRefs = new List<ReportService2010.ItemReference>();
ReportService2010.DataSource[] itemDataSources = rs2010.GetItemDataSources(catItem.Path);
foreach (ReportService2010.DataSource itemDataSource in itemDataSources)
{
    ReportService2010.ItemReference itemRef = new ReportService2010.ItemReference();
    itemRef.Name = itemDataSource.Name;
    itemRef.Reference = "/Data Sources/TMS";
    itemRefs.Add(itemRef);
}
rs2010.SetItemReferences(catItem.Path, itemRefs.ToArray());
The problem was that I was not using the same DataSource name as found in the report's .rdl file. I was able to determine the correct name using the GetItemDataSources method. Since this method returns an array that may contain more than one item, I looped through it to create multiple ItemReferences if more than one existed, though I doubt that happens very often, if at all.

Accessing cache.dat through ODBC

OK, so I am trying to extract the information from a CACHE.DAT database sent from another business. I am trying to get at the data using ODBC. I am able to see the globals from the SAMPLES namespace when exporting to Access, but I can't get the data from this new database to show up.
I've tried to tackle this problem two ways. First, I simply shut down Caché, replaced the existing database in InterSystems\TryCache\mgr\samples, and restarted Caché. Once I restarted, I could see all the globals from the new database in the Management Portal. If I test the ODBC connection from the Windows ODBC administrator, it connects. However, when I try to pull the data into an Access database using ODBC, there are no tables showing up to import.
I've also tried to add the database to my Cache but it gave me the error:
ERROR #5805: ID key not unique for extent 'Config.Databases'
I tried to fool around with the values in there but to no avail. This is my first time messing with anything like this and any, ANY help would be awesome.
If you access the Management Portal, do you see any table definitions defined for your namespace? If not, the application was written in Caché ObjectScript with no classes created to provide object/SQL access. If this is the case, it could be a fair amount of work to create the classes that describe the data (global structures).
Matt,
Did the business that provided the CACHE.DAT file indicate that you should have ODBC access to the data?
Did they provide some document describing the data/globals? If so, you could create the classes that map the data. Depending on what you want to do, this could be a resource-intensive process or not.
If you want to directly access globals you can create a stored procedure that will do so. You should consider the security implications before you do this - it will expose all data in the global to anyone with ODBC access.
Here is an example of a stored procedure that returns the values of up to 9 global subscripts, plus the value at that node. You can modify it pretty easily if you need to.
Query OneGlobal(GlobalName As %String) As %Query(ROWSPEC = "NodeValue:%String,Sub1:%String,Sub2:%String,Sub3:%String,Sub4:%String,Sub5:%String,Sub6:%String,Sub7:%String,Sub8:%String,Sub9:%String") [SqlProc]
{
}
ClassMethod OneGlobalExecute(ByRef qHandle As %Binary, GlobalName As %String) As %Status
{
S qHandle="^"_GlobalName
Quit $$$OK
}
ClassMethod OneGlobalClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = OneGlobalExecute ]
{
Quit $$$OK
}
ClassMethod OneGlobalFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = OneGlobalExecute ]
{
S Q=qHandle
S Q=$Q(@Q)
I Q="" S Row="",AtEnd=1 Q $$$OK
S Depth=$QL(Q)
S $LI(Row,1)=$G(@Q)
F I=1:1:Depth S $LI(Row,I+1)=$QS(Q,I)
F I=Depth+1:1:9 S $LI(Row,I+1)=""
S AtEnd=0
S qHandle=Q
Quit $$$OK
}
I don't have code for you to get this from Access, but for reference, to access this from Python you might use the following (with pyodbc):
import pyodbc

class CacheOdbcClient:
    connectionString = "DSN=MYCACHEDSN"

    def getGlobalAsOverlyLargeList(self):
        # Call the OneGlobal stored procedure and collect every node it returns
        connection = pyodbc.connect(self.connectionString)
        cursor = connection.cursor()
        cursor.execute("call MyPackageName.MyClassName_OneGlobal ?", "MYGLOBAL")
        rows = []
        for row in cursor:
            rows.append((row.NodeValue, row.Sub1, row.Sub2, row.Sub3, row.Sub4,
                         row.Sub5, row.Sub6, row.Sub7, row.Sub8, row.Sub9))
        return rows
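A minimal usage sketch, assuming the MYCACHEDSN DSN and the stored procedure above are in place (the global name MYGLOBAL is a placeholder):
if __name__ == '__main__':
    client = CacheOdbcClient()
    # Print the first few nodes to sanity-check the ODBC connection
    for node in client.getGlobalAsOverlyLargeList()[:10]:
        print(node)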