Crystal Reports constantly has open connections to database server - crystal-reports

I have Crystal Reports installed and working against an Oracle server. When opening Toad and checking "Top Session Finder", I see that Crystal (cms.exe) constantly has open connections to the database.
Why does cms.exe create so many connections to the database? Can they be limited? What is their purpose?
Can I set Crystal Server to open connections only when it needs them and close them when it is done using them?

The cms.exe process is the Central Management Server (CMS), one of the core pieces of the Crystal Server platform.
Why does cms.exe create so many connections to the database? Can they be limited?
If you open the CMC (Central Management Console), select Servers, and open the properties page for the Central Management Server, you should see an option called System Database Connections Requested.
The option's purpose, as explained in the Administrator's Guide:
Specifies the number of CMS system database connections that the CMS attempts to establish. If the server cannot establish all of the requested database connections, the CMS continues to function, but at reduced performance, since fewer concurrent requests can be served simultaneously. The CMS will attempt to establish additional connections until the requested number of connections is established.
What is their purpose?
Again, taken from the Administrator's Guide:
The CMS maintains security and configuration information, directs
service requests to servers, manages auditing, and maintains the CMS
system database.
In other words: your Crystal Server environment cannot function without the CMS. Shut it down and your whole environment stops working.
It always needs open connections to the database in order to serve requests quickly. While you can limit the number of connections, doing so might impact the performance of your Crystal Server environment.
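If you want to verify this from the database side (much as you were doing with Toad's Top Session Finder), here is a minimal sketch in Java, assuming you have SELECT privileges on v$session, an Oracle JDBC driver on the classpath, and that the CMS sessions report a program name starting with "cms" (as they appeared in your session finder); the connection details are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class CmsSessionCount {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; requires SELECT privilege on v$session.
            String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL";
            try (Connection conn = DriverManager.getConnection(url, "monitor", "secret");
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT status, COUNT(*) FROM v$session " +
                     "WHERE LOWER(program) LIKE 'cms%' GROUP BY status");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Counts CMS sessions by status (ACTIVE/INACTIVE).
                    System.out.println(rs.getString(1) + ": " + rs.getInt(2));
                }
            }
        }
    }

Once the CMS has established its full pool, the total across statuses should roughly match the System Database Connections Requested value, which is the behavior you are observing.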

Related

How to find connection leaks on PostgreSQL cloud sql

I'm using Postgres provisioned by Google Cloud SQL. Recently we have seen the number of connections increase significantly; we had to raise the limit from 200 to 500, then to 1000. In the Google Cloud console, Postgres reports 800 current connections.
However, I have no idea where these connections come from. We have one App Engine service with little traffic at the moment, another application hosted on Kubernetes, and a dozen or so batch jobs that connect to the database. Clearly there must be a connection leak somewhere.
Is there any way I can see where these connections originate from?
All applications connecting to it are Java based at the moment. They use the HikariCP connection pool. I'm considering changing the "test query" run on connection so that it inserts a record into a log table; that way I could perhaps find out where the connections originate.
But are there better ways available?
Thanks,
Consider monitoring connection activity with pg_stat_activity, e.g. SELECT * FROM pg_stat_activity;
As per the documentation:
Connections that show an IP address, such as 1.2.3.4, are connecting using IP. Connections with cloudsqlproxy~1.2.3.4 are using the Cloud SQL Proxy, or else they originated from App Engine. Connections from localhost are usually to a First Generation instance from App Engine, although that path is also used by some internal Cloud SQL processes.
Also, take a look at the best practices for managing database connections, which contain information on opening and closing connections, connection counts, and setting a connection duration in the Java programming language.
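As an alternative to repurposing HikariCP's test query, a less invasive sketch: the standard PostgreSQL JDBC driver supports an ApplicationName connection property, so you can tag each service's pool and let pg_stat_activity attribute every connection (the URL and credentials below are placeholders):

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class TaggedPool {
        public static HikariDataSource create(String serviceName) {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:postgresql://10.0.0.5:5432/appdb"); // placeholder URL
            config.setUsername("app");     // placeholder credentials
            config.setPassword("secret");
            config.setMaximumPoolSize(10); // cap each pool so leaks surface early
            // Shows up as application_name in pg_stat_activity.
            config.addDataSourceProperty("ApplicationName", serviceName);
            return new HikariDataSource(config);
        }
    }

With that in place, a query such as SELECT application_name, client_addr, state, count(*) FROM pg_stat_activity GROUP BY 1, 2, 3; shows which service holds which connections, and leaked connections tend to stand out as long-lived rows in the idle state.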

DataStage: run low level socket connection

I am a Unix Java developer trying to help a DataStage developer, so I am out of my depth here.
The DataStage process connects to a database hosting financial transactions on a Unix server. There is a DataStage process for migrating financial transactions to the ACCOUNTING system. The ETL developers have, for one reason or another, specified that they cannot run one or more specific ETL jobs while new financial transactions are being taken in, and have required that the process that inserts transactions into the DB be stopped first.
The Java geek in me thinks that having some process check for a service listening on port 55555 would be perfect. But we cannot find a way for DataStage to create a socket connection to a port to check it. I don't do DataStage, so I don't know how to work around its limitations.
The ETL developer thinks a cron script running every minute that inserts an up/down status for the process into a special table would be perfect. I think that is a waste of CPU.
We cannot be the only company that cannot run an ETL job while some process is running on a remote system.
How did you solve this issue? Is there a way to connect to a remote server's socket and query the service from DataStage?
thanks
After a bunch of discussion, these are the options we found:
1. Add a step to the server start/stop scripts that writes the process status to a table. Pro: easy to implement. Con: not truly accurate (some geek like me is likely to bypass the start/stop script and run the executable directly, bypassing the step that inserts the status). No network or InfoSec paperwork.
2. A cron-based script that updates the table with the status every minute. What a pain! No network or InfoSec paperwork.
3. A script made available to the network through inetd or xinetd. Problem: the DataStage ETL developer does not know how to connect to a socket from a C or Java program (see the sketch after this list). Creates InfoSec and network paperwork issues.
4. A new web service (there is already a Tomcat server serving up a number of web services). Same problem: the developer does not know how to connect to a socket from a C or Java program. Creates InfoSec and network paperwork issues.
Options 3 and 4 are accurate and real-time. Options 1 and 2 open up the possibility of inaccuracies when someone bypasses the process, but that opens up a different can of worms.
We are probably going to implement option 1.
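For reference, the socket check behind options 3 and 4 is only a few lines of Java. A sketch, assuming the intake process listens on port 55555 (a DataStage job could run it through an external command, e.g. a before-job ExecSH call, and branch on the exit code):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Exits with 0 if something accepts a TCP connection on host:port
    // (i.e. the intake process is up), 1 otherwise.
    public class PortCheck {
        public static void main(String[] args) {
            String host = args.length > 0 ? args[0] : "localhost"; // placeholder default
            int port = args.length > 1 ? Integer.parseInt(args[1]) : 55555;
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 2000); // 2-second timeout
                System.exit(0); // port open: intake is running, do not start the ETL
            } catch (IOException e) {
                System.exit(1); // port closed or unreachable: safe to run the ETL
            }
        }
    }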

Postgres terminology: client vs connection

In Postgres, is there a one-to-one relationship between a client and a connection? In other words, is a client always exactly one connection, and can no client open more than one connection?
For example, when Postgres says:
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already.
is that equivalent to "too many connections already"?
Also, as far as I understand, Postgres uses one process for each client. So does this mean that each process is used for one connection only?
Refer to https://www.postgresql.org/docs/9.6/static/connect-estab.html:
PostgreSQL is implemented using a simple "process per user"
client/server model. In this model there is one client process
connected to exactly one server process. As we do not know ahead of
time how many connections will be made, we have to use a master
process that spawns a new server process every time a connection is
requested.
So yes, one server process serves one connection.
You can have as many connections from a single client (machine, application) as the server can manage. The server can support a given number of connections, whether or not these come from different clients (machine, application) is irrelevant to the server.
The connection is made to the postmaster process, which listens on the port that PG is configured to use (5432 by default). When a connection is established (after authentication), the server spawns a process that is used exclusively by that single client. The client can make multiple connections to the same server, for instance to connect to different databases, or to the same database using different credentials, etc.
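You can watch this model in action from a single client. A small sketch, assuming a reachable Postgres instance and the pgJDBC driver on the classpath (URL and credentials are placeholders): two connections opened by the same program are served by two different backend processes, as reported by pg_backend_pid():

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class BackendPids {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:postgresql://localhost:5432/postgres"; // placeholder
            try (Connection c1 = DriverManager.getConnection(url, "postgres", "secret");
                 Connection c2 = DriverManager.getConnection(url, "postgres", "secret")) {
                System.out.println("conn1 backend pid: " + backendPid(c1));
                System.out.println("conn2 backend pid: " + backendPid(c2)); // a different PID
            }
        }

        private static int backendPid(Connection c) throws Exception {
            try (Statement s = c.createStatement();
                 ResultSet rs = s.executeQuery("SELECT pg_backend_pid()")) {
                rs.next();
                return rs.getInt(1); // PID of the server process serving this connection
            }
        }
    }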

Meteor: How to develop multiple webservers with Reactive DOTW

I am currently looking at possible development models for a device that will be in a clients home. I need the device to run a local copy of Meteor while also being able to get and insert information from a central server in a secure/reactive way.
(Architecture diagram not shown; sensitive information was excluded from it.)
I am required to run a local server because I need to run shell commands on the device. While the device could make HTTP webhook calls, that would be slow due to packet travel time and would not meet the requirements.
I know that the local server could connect to the central server's MongoDB, which would be ideal; however, since this local server is physically located in a client's house, the MongoDB password would be exposed (a big security problem). I would also be unable to control what information is sent to the local server. I was unable to find a way to subscribe to an external server, which would be a great solution.
Another option would be for the local server to simply use HTTP requests; however, another requirement is that audit requests appear almost as soon as they are issued, which is a natural fit for reactive MongoDB data. A heartbeat wouldn't really fit due to the data/processing overhead and slowness.
In summary, the question is: how can a device run a local copy of Meteor while also being able to get and insert information from a central server in a secure, reactive way?
Well, in the end I found that you can make cross-server and even CORS connections using the API described at https://docs.meteor.com/api/connections.html
So now any aspiring developer can use the DDP framework for this.

Can I create a remote server with MongoDB? How?

To be clearer, my question is about creating a server with MongoDB on cloud hosting (for example) and accessing it through another server.
Example:
I have a mobile app.
I host my MongoDB on cloud hosting (Ubuntu).
I want to connect my app to the DB on the cloud server.
Is it possible? How?
I'm just getting into this, and my question is exactly how to set up a MongoDB server so that I can access it remotely, beyond "localhost", unlike all the tutorials I've seen.
From what you are describing, I think you want to implement a 2-Tier-Architecture. For practically all use cases, don't do it!
It's definitely possible, yes. You can open up the MongoDB port in your firewall. Let's say your computer has a fixed IP or a fixed name like mymongo.example.com. You can then connect to mongodb://mymongo.example.com:27017 (if you use the default port). But beware:
1) Security: You need to make sure that clients can only perform those operations that you want to allow, e.g. using MongoDB's integrated authentication, otherwise some random script kiddie will steal your database, delete it, or fill it with random data. Many servers, even if they don't host a well-known service, get attacked thousands of times per day. Also, you probably want to encrypt the connection so people can't spy on it. And to make it all worse, you will have to store the database credentials in your client app, which is practically impossible to do in a truly secure way.
2) Software architecture: There is a ton of arguments against this architecture, but 1) alone should be enough. You never want to couple your client to the database, be it because of data migrations, software updates, security considerations, etc.
3-Tier
So what to do instead? Use a 3-Tier-Architecture: Host a server of some kind on mymongo.example.com that then connects to the database. That server could be implemented in nginx/node.js, iis/asp.net, apache/php, or whatever. It could even be a plain old C application (like many game servers).
The MongoDB instance can still reside on yet another machine, but when you use a server in between, the database credentials are known only to the server, not to all the clients.
Yes, it is possible. You would connect to MongoDB using the IP address of your host, or preferably using its fully qualified hostname, rather than "localhost". If you do that, you should secure your MongoDB installation, otherwise anyone would be able to connect to your MongoDB instance. At an absolute minimum, enable MongoDB authentication. You should read up on MongoDB Security.
For a mobile application, you would probably have some sort of application server in front of MongoDB, e.g. your mobile application would not be connecting to MongoDB directly. In that case only your application server would be connecting to MongoDB, and you would secure MongoDB accordingly.
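To make that concrete on the application-server side, a minimal sketch using the MongoDB Java sync driver, assuming authentication and TLS have been enabled on the mongod; the host, database, user, and password below are all placeholders:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;

    public class AppServerMongo {
        public static void main(String[] args) {
            // Credentials live on the application server only, never in the mobile app.
            String uri = "mongodb://appUser:secret@mymongo.example.com:27017/appdb"
                       + "?authSource=admin&tls=true"; // placeholders; enable auth + TLS first
            try (MongoClient client = MongoClients.create(uri)) {
                MongoCollection<Document> users =
                        client.getDatabase("appdb").getCollection("users");
                System.out.println("user count: " + users.countDocuments());
            }
        }
    }

The mobile app then talks only to this application server (e.g. over HTTPS), and the MongoDB credentials never leave it.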