I am submitting a connector to Kafka Connect. The connector being created is an SFTP connector. When the password is wrong, the submission still comes back with a success response even though the connector fails; no "wrong password" error is returned at that point. This is just one scenario; there could be multiple scenarios like this. When I call <host>/connectors/<connector-name>/status, I do get an error saying it failed to establish a connection, but this endpoint has a slight delay. If I try it immediately after creating the connector, I may not get any response (404).
What is the proper way of handling this using the status API call? Is there a delay that needs to be applied before firing this API, or can it be handled while submitting the connector to the API?
When you create the connector, it naturally needs to load the JAR(s) responsible for the tasks, then distribute the tasks to actually start the connector code (which is responsible for connecting to the SFTP server with the connection details).
Therefore, the delay is natural, and there is no way to know your connection details are incorrect unless you test them yourself before launching the connector.
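One practical approach, sketched below under a few assumptions (the default Connect REST port 8083, a hypothetical connector name, and illustrative retry timings), is to poll the status endpoint after creating the connector, treating a 404 as "not registered yet" and retrying until the connector reports RUNNING or FAILED:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConnectorStatusPoller {

    // Hypothetical worker URL and connector name; adjust to your environment.
    private static final String STATUS_URL =
            "http://localhost:8083/connectors/my-sftp-connector/status";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(STATUS_URL)).GET().build();

        // Poll for up to ~60 seconds; the endpoint may return 404 until the
        // connector and its tasks have actually been created and started.
        for (int attempt = 1; attempt <= 12; attempt++) {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            if (response.statusCode() == 200) {
                String body = response.body();
                // Crude string checks for brevity; a real client would parse the JSON
                // and inspect connector.state plus each task's state and trace fields.
                if (body.contains("\"FAILED\"")) {
                    System.out.println("Connector or task failed: " + body);
                    return;
                }
                if (body.contains("\"RUNNING\"")) {
                    System.out.println("Connector is running.");
                    return;
                }
            }
            Thread.sleep(5_000); // back off before the next status check
        }
        System.out.println("Connector did not reach RUNNING or FAILED state in time.");
    }
}
```

When a task does fail, its entry in the status JSON includes a trace field with the underlying stack trace, which is where an SFTP authentication failure would eventually surface.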
I am getting the below error whilst running my Python Azure Function on the local machine in VSCode.
For clarification the message is:
The listener for function 'Functions.IoT_Data-Handler' was unable to start. Microsoft.Azure.EventHubs.Processor: Encountered error while fetching the list of EventHub PartitionIds. System.Private.CoreLib: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
This error has never occurred before in the time I have been using VSCode for Azure Functions (since last September). The only thing that has changed recently is that I now deploy this function within an Azure Function premium resource, but really that should not matter in the dev environment.
For information, this function is hooked up to an Azure IoT-Hub endpoint and is simply reading and processing the uplink data before saving it to an Azure SQL database.
Can anyone offer any advice?
Check whether the findings below help to fix your issue:
As #PeterBons said, check that the connection string is given correctly in local.settings.json:
Whatever property name holds the Event Hub endpoint/IoT Hub endpoint connection string in local.settings.json, that same property name should be referenced in the function.json file (a sketch of the mapping follows below).
Try using the IoT Hub connection string without the consumer group name, as mentioned in GitHub Issue #5512.
I found similar issues on SO (1 & 2) that may be helpful in fixing your issue.
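To illustrate that mapping, here is a minimal sketch of how the two files might line up; the setting name EventHubConnection, the endpoint placeholders, and the binding details are hypothetical, not taken from the question.

local.settings.json (only the relevant part):

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "EventHubConnection": "Endpoint=sb://<iothub-built-in-endpoint>.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=<key>;EntityPath=<event-hub-compatible-name>"
  }
}
```

function.json, where the trigger binding's connection property must reference that same setting name:

```json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "type": "eventHubTrigger",
      "direction": "in",
      "name": "event",
      "eventHubName": "<event-hub-compatible-name>",
      "connection": "EventHubConnection"
    }
  ]
}
```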
We are trying to reach an API hosted in our company network using the REST connector in ADF (a SHIR is used). The linked service connection is successful, but the dataset is unable to read the data and the copy activity is failing as well with the error below. Please share your thoughts on resolving this.
Failure happened on 'Source' side. ErrorCode=UserErrorFailToReadFromRestResource,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=An error occurred while sending the request.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Net.Http.HttpRequestException,Message=An error occurred while sending the request.,Source=mscorlib,''Type=System.Net.WebException,Message=Unable to connect to the remote server,Source=System,''Type=System.Net.Sockets.SocketException,Message=A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond ,Source=System,'
This error is mostly seen due to firewall issues. You might want to verify your network firewall settings to allow the API request through.
Also, verify whether your API call works as expected using other API testing tools (a quick reachability check you could run from the SHIR machine is sketched below). If the issue persists, you can raise a support ticket so engineers can investigate the issue further.
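As a rough way to check reachability from the machine running the self-hosted integration runtime, you could try a plain TCP connection test against the API host from that box. A minimal sketch, where the host and port are placeholders for your internal API:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ApiReachabilityCheck {
    public static void main(String[] args) {
        // Hypothetical host and port of the internal API; run this on the SHIR machine.
        String host = "internal-api.example.com";
        int port = 443;

        try (Socket socket = new Socket()) {
            // A long silence followed by "did not properly respond after a period of time"
            // usually points to a firewall dropping the traffic rather than rejecting it.
            socket.connect(new InetSocketAddress(host, port), 10_000);
            System.out.println("TCP connection to " + host + ":" + port + " succeeded.");
        } catch (IOException e) {
            System.out.println("Could not reach " + host + ":" + port + ": " + e.getMessage());
        }
    }
}
```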
If you are able to preview data in your source, then check your sink connection, as this issue can also occur when the sink in the copy activity is behind a firewall. I was getting the same issue, and when I tried copying to a container without a firewall it worked. It's odd that the error is reported on the source side when the issue is with the sink.
I noticed the error below during a load test with multiple users, but not with a single SOAP request.
Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: javax.resource.ResourceException: IJ000453: Unable to get managed connection for java:
This could be due to any of the following:
The datasource connection pool has not been tuned (e.g. max-pool-size and blocking-timeout-millis) correctly for the maximum load on the application.
The application is leaking connections because it is not closing them and thereby returning them to the pool.
Threads with connections to the database are hanging and holding on to the connections.
Make sure that the min-pool-size and max-pool-size values for the respective datasource are set according to your application load testing, and that connections are being closed after use inside the application code.
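To illustrate the last point about closing connections, here is a minimal sketch of the pattern that reliably returns connections to the pool; the DAO name, the injected DataSource, and the query are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderDao {

    private final DataSource dataSource; // e.g. looked up from JNDI for the pooled datasource

    public OrderDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int countOrders() throws SQLException {
        // try-with-resources closes the ResultSet, Statement, and Connection even when an
        // exception is thrown, so the connection always goes back to the pool. Missing this
        // under concurrent load is a classic cause of "Unable to get managed connection".
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement =
                     connection.prepareStatement("SELECT COUNT(*) FROM orders");
             ResultSet resultSet = statement.executeQuery()) {
            return resultSet.next() ? resultSet.getInt(1) : 0;
        }
    }
}
```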
Most likely you've found a bottleneck in your application; it seems it cannot handle that many virtual users. The easiest solution would be to raise an issue in your bug tracker and let the developers investigate it.
If you need to provide the root cause of the failure, I can think of at least two possible reasons:
Your application or application server configuration is not suitable for high loads (i.e. the number of connections in your JBoss JDBC connection pool configuration is lower than required for the number of virtual users you're simulating). Try amending the min-pool-size and max-pool-size values to match the number of virtual users (a configuration sketch follows after this list).
Your database is overloaded and hence cannot accept that many queries. In this case you can consider load testing the database separately (i.e. fire requests at the database directly via JMeter's JDBC Request sampler, without hitting your application's SOAP endpoint). See The Real Secret to Building a Database Test Plan With JMeter article to learn more about the database load testing concept.
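For reference, a sketch of where those pool settings live in a JBoss AS 7 / WildFly standalone.xml datasource definition; the JNDI name, connection URL, and the actual numbers are placeholders to be sized from your own load test results:

```xml
<datasource jndi-name="java:jboss/datasources/AppDS" pool-name="AppDS" enabled="true">
    <connection-url>jdbc:oracle:thin:@dbhost:1521:ORCL</connection-url>
    <driver>oracle</driver>
    <pool>
        <!-- size these to cover the number of concurrent virtual users in the test -->
        <min-pool-size>20</min-pool-size>
        <max-pool-size>200</max-pool-size>
    </pool>
    <timeout>
        <!-- how long a request waits for a free connection before failing -->
        <blocking-timeout-millis>30000</blocking-timeout-millis>
    </timeout>
</datasource>
```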
We have implemented contract testing using message Pact, accessing Kafka topics directly to retrieve the messages from the queues. The Kafka topics are accessed using PLAINTEXT authentication, so we have a separate LoginModule defined in a config file with a username and password. When I run the test from the consumer end, it picks up the correct config file and the scripts run. But when I run pact:verify using the same setting in the script, the LoginModule is not recognized and I get an "unable to find LoginModule class" error. On the Pact side I am getting a "Failed to invoke provider method" error. Has anyone faced such issues using Pact with Kafka before?
Are you talking about this one: github.com/reevoo/pact-messages? If so, we are not currently supporting pact-messages, as we have yet to finalize the base-level tech with HTTP/JSON.
This has been brought up in the past and is known within the Foundation, but we'd rather lock down the core technology before trying to tackle other message protocols/formats.
I am sending a SOAP request using SoapUI to fetch data from an Oracle 10g DB. SoapUI successfully displays the response when the DB fetches results within 30 seconds.
But the real problem is that when the DB response exceeds 30 seconds, SoapUI displays the following error message:
Fault occurred while processing.
I have tried the 3 things below:
1) Increased the socket timeout to 1200000
2) Increased the timeout values in tomcat server config file (/conf/server.xml)
3) Checked for any Null Pointer exception and found none.
Please help me get a success message in SoapUI. Thanks in advance.
There are many components between the SoapUI adapter and the database engine. Most of these will have a configurable timeout.
The listener on the database server will pass the query to the database engine - and the database engine itself will have some protection against long-running queries. It's quite likely that the database is killing queries that run over 30 seconds.
You can prove this by capturing a query from your application and trying the same query directly in the database administration tool. This will tell you why the query fails (if it fails.)
JDBC calls a component listening on the database server - this flow itself will have a timeout, which you can set at the JDBC level somewhere in your environment.
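For example, if the 30-second cutoff turns out to be enforced at the JDBC layer, the per-statement query timeout is one of the knobs you can raise. A minimal sketch, where the connection URL, credentials, and query are placeholders and the Oracle JDBC driver is assumed to be on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LongRunningQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details for the Oracle 10g database.
        String url = "jdbc:oracle:thin:@dbhost:1521:ORCL";

        try (Connection connection = DriverManager.getConnection(url, "app_user", "app_password");
             Statement statement = connection.createStatement()) {

            // Allow the query to run well past the 30 seconds at which the request
            // currently fails; 0 would mean "no limit" for most drivers.
            statement.setQueryTimeout(120);

            try (ResultSet resultSet =
                         statement.executeQuery("SELECT COUNT(*) FROM big_table")) {
                if (resultSet.next()) {
                    System.out.println("Rows: " + resultSet.getLong(1));
                }
            }
        }
    }
}
```

Keep in mind that the layers between SoapUI and JDBC (Tomcat in this case) have their own timeouts, so every hop on the path needs to allow for the longer-running query.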