What is the difference between a DICOM viewer, a PACS system, and a RIS in medical imaging?

I have installed OsiriX, an open source DICOM viewer, on my computer. I was wondering what makes it different from a PACS (Picture Archiving and Communication System) or a RIS (Radiology Information System), and how OsiriX could be incorporated into one of them.
Just trying to get a better understanding of these three apparently different but interrelated concepts.

The viewer is an application that lets you look at images. The PACS is a server that stores the images (PACS products often come with an accompanying viewer). The RIS is a workflow/record system for scheduling scanner time, managing appointments, storing reports, etc.

OsiriX provides both functions, storage and viewing, so it can basically act as a standalone PACS. Nowadays every PACS comes with its own viewer, either on a client-server basis or as a web viewer; historically, the DICOM workstation and the PACS viewer were separate products.
The RIS (Radiology Information System) covers the processes of the radiology department: scan orders > scheduling > generating the modality worklist > scan reports > modality reports > MIS reports > billing (optional) > radiology reporting > report dispatch > sending reports to the PACS, etc.
The RIS communicates with the PACS through HL7. The PACS and the RIS should be two separate servers; some products combine both on the same server, but that is not suitable for larger workflows.
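To make the integration concrete: a viewer or workstation talks to a PACS over the DICOM network protocol (C-STORE to push images, C-FIND/C-MOVE to query and retrieve). Below is a minimal Python sketch using the pynetdicom library that pushes one CT image to a PACS; the host, port, and AE titles are hypothetical placeholders for whatever your PACS (or OsiriX, which can itself act as a DICOM listener) is configured with.

from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import CTImageStorage

# Our Application Entity; the title is a hypothetical example.
ae = AE(ae_title="MY_WORKSTATION")
ae.add_requested_context(CTImageStorage)   # we intend to send CT images

# Associate with the PACS (hypothetical host, port, and AE title).
assoc = ae.associate("pacs.example.org", 11112, ae_title="PACS_SERVER")
if assoc.is_established:
    ds = dcmread("image.dcm")              # load a local DICOM file
    status = assoc.send_c_store(ds)        # push it into the archive
    print("C-STORE status: 0x%04x" % status.Status)
    assoc.release()
else:
    print("Could not associate with the PACS")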

Retrieve historical data from Citrix Director

We would like to have a list of all users that used a specific Citrix published desktop. Apparently this can be done within the Citrix Director tool but we would like to have this automated with PowerShell.
The following Citrix cmdlet exposes the data we need, but only for the last 48 hours:
Get-BrokerConnectionLog
How would it be possible to retrieve this data for, say, the last 2 months?
If you have the Platinum / Premium edition you can set the retention periods for different data.
Look at Get-MonitorConfiguration to see the retention time and Set-MonitorConfiguration to set what you need.
https://www.citrix.com/blogs/2016/12/16/extended-monitoring-data-retention-for-enterprise-customers-with-citrix-director-7-12/
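If raising the retention is not an option (it needs the Platinum/Premium licensing mentioned above), a workaround is to archive the log yourself before it ages out of the 48-hour window. Below is a rough, unofficial Python sketch that shells out to PowerShell; it assumes the Citrix snap-ins are installed on the machine running it, and the output file name is hypothetical. Schedule it daily and you accumulate your own history.

import subprocess
from datetime import date

# Dump today's connection log to CSV before it falls out of retention.
ps_command = (
    "Add-PSSnapin Citrix*; "
    "Get-BrokerConnectionLog | ConvertTo-Csv -NoTypeInformation"
)
result = subprocess.run(
    ["powershell.exe", "-NoProfile", "-Command", ps_command],
    capture_output=True, text=True, check=True,
)
# Hypothetical archive location; one file per day, de-duplicate when reporting.
with open("connection-log-%s.csv" % date.today().strftime("%Y%m%d"), "w") as f:
    f.write(result.stdout)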

What is the difference between File System and File Management in an operating system?

I’ve found file management explanations, file system explanations, and “the file system is part of file management” explanations. But I am wondering whether they are the same thing or two different things, because I cannot seem to find an article that compares them.
A modern Operating System, to be portable, must be file system independent; i.e., it should not matter what type of storage format a given media device contains. At the same time, a media device must contain a specific type of storage format to hold files and folders, yet remain Operating System independent.
For example, an OS should be able to handle any file locally, leaving the actual transfer of these files from physical media to the OS (and vice versa) to be managed by the file system manager. Therefore, an OS can be completely independent of how the file was stored on the media.
With this in mind, there are at least two layers, usually more, of management between the file being viewed and the file on the physical media. Here is a (simple) list of layers that might be used from top down.
1. OS App viewing the file
2. OS File Manager
3. OS File System Manager (allowing multiple file systems)
4. Specific File System Driver
5. Media Device Driver
When a call to read a file is made, the app (1) calls the OS File Manager (2), which in turn, based on how the file was opened, calls the OS File System Manager (3), which routes the call to the specific File System Driver (4), which then calls the Media Device Driver (5) for the actual access.
Please note that any or all of these layers could have a working cache manager, which means calls can be processed and returned without calling the lower layers; e.g., a layer may read more than requested in anticipation of another read.
By having multiple layers like this, you can have any (physical) file system and/or media device you wish and the OS would be none the wiser. All you need is a media driver for the specific physical device and a file system manager for the physical format of the contents of the media. As long as these layers all support the common service calls, any format of media and content on that media will be allowed by the OS.
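As a purely illustrative toy (in Python, not how any real OS is written), the layering might look like this: every layer exposes the same read interface and delegates downward, and any layer can answer from a cache without descending further.

class MediaDeviceDriver:                        # layer 5: raw device access
    def read_block(self, block_no):
        return b"\x00" * 512                    # pretend sector

class FileSystemDriver:                         # layer 4: a specific FS (ext4, NTFS, ...)
    def __init__(self, device):
        self.device = device
    def read(self, path, offset):
        block_no = hash((path, offset)) % 1024  # fake path-to-block mapping
        return self.device.read_block(block_no)

class FileSystemManager:                        # layer 3: routes to the right FS
    def __init__(self):
        self.mounts = {"/": FileSystemDriver(MediaDeviceDriver())}
        self.cache = {}
    def read(self, path, offset):
        key = (path, offset)
        if key not in self.cache:               # only descend on a cache miss
            self.cache[key] = self.mounts["/"].read(path, offset)
        return self.cache[key]

class FileManager:                              # layer 2: what applications call
    def __init__(self):
        self.fsm = FileSystemManager()
    def read(self, path, offset=0):
        return self.fsm.read(path, offset)

# Layer 1: the app. It never learns which file system or device was involved.
data = FileManager().read("/docs/report.txt")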

SAP Fiori Performance Testing

I need to do load-testing for SAP Fiori using HP load runner.
Running SAP GUI Vusers requires a large number of GDIs (Graphics Device Interface handles) on the load generator (LG) machine; to overcome this, the Terminal Server concept is brought into the picture, and that works fine for running a large number of Vusers on an LG machine.
My doubt is about SAP Fiori (HTML5/SAPUI5) Vusers: do they also require a large number of GDIs for testing on the LG machine, and should the Terminal Server concept be brought back to solve this?
Regards & Thanks,
Venkateshs.

Livecode and Biopac Interface

My lab recently purchased the BIOPAC system to measure skin conductance as part of our experiments.
My experiments are normally coded using Livecode.
I need to be able to tell Livecode to automatically score and label digital event marks in the skin conductance responses in the BIOPAC System.
Does anyone have any experience interfacing the BIOPAC system with LiveCode? Does anyone have some code they have used?
Thanks!
There is a gadget from "Bönig und Kallenbach":
http://www.bkohg.com/serviceusbplus_e.html
This has analog inputs that you could easily configure to measure skin resistance. It comes with a framework for LiveCode, and connects through the USB port.
I have used these in many applications to connect the real world to my computer. All your processing is done in the usual LC way.
I think there is no direct example using Biopac hardware.
To tinker with your own software outside AcqKnowledge, you have to purchase the BHAPI (MPDEV.dll) from Biopac. The problem is that BHAPI only supports Windows, not macOS or Linux.
Another way is streaming data through AcqKnowledge 5.x: start an acquisition in AcqKnowledge and stream the data over the network, then receive the data stream in LiveCode and process it.
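The receiving side of that streaming approach is just a network client. The following sketch is in Python only to show the shape of such a client; the port and the sample framing are assumptions to replace with whatever AcqKnowledge's network data transfer settings actually specify, and the same structure can be reproduced with LiveCode's socket commands.

import socket
import struct

# Hypothetical endpoint: adjust to the AcqKnowledge streaming configuration.
HOST, PORT = "127.0.0.1", 15010

with socket.create_connection((HOST, PORT)) as sock:
    buf = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:                 # acquisition stopped, stream closed
            break
        buf += chunk
        # Assumption: each sample is one little-endian 8-byte double.
        n = len(buf) // 8
        samples = struct.unpack("<%dd" % n, buf[:n * 8])
        buf = buf[n * 8:]
        for s in samples:
            pass                      # score/label skin-conductance events here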

Need advice: How to share a potentially large report to remote users?

I am asking for advice on possibly better solutions for the part of the project I'm working on. I'll first give some background and then my current thoughts.
Background
Our clients can use my company's products to generate potentially large data sets for use in their industry. When the data sets are generated, the clients will file a processing request with us.
We want to send the clients a summary email which contains some statistical charts as well as sampling points from the data sets so they can do some initial quality control work. If the data sets are of bad quality, they don't need to file any request.
One problem is that the charts and sampling points can be too large to send in an email. The charts and the sampling points we want to include in the emails are pictures. Although we can use a lossy format such as JPEG to save space, we cannot control how many data sets will be included in the summary email, so the total size could still exceed the normal email size limit.
In terms of technologies, we are mainly developing in Python on Ubuntu 14.04.
Goals of the Solution
In general, we want to present a report-like document for the clients to do some initial QA on. The report may contain external links but does not need to be very interactive. In other words, a static report should be fine.
We want to reduce the steps our clients must take to read the report. For example, if the report can be just an email, the user only needs to (1) log in and (2) open the email. If they use a mail client, they may skip (1) and just open it and begin to read.
We also want to minimize the burden of maintaining extra user accounts, for both us and our clients. For example, if a solution requires us to register a new user account, it is still acceptable but would not rank very high.
Security is important because our clients don't want their reports to be read by unauthorized third parties.
We want the process automated, so the solution should provide a programming interface that lets us automate the report sending/sharing.
Performance is NOT a critical issue. Our user base is not large, at most in the hundreds, and they don't generate data frequently, at most once a week. We don't need real-time responses; even a delay of a few hours is acceptable.
My Current Thoughts on a Solution
Possible solution #1: an in-house web service. I can set up a server machine and develop our own web service. We put the report into our database and the clients can then query it over the Internet.
Possible solution #2: Amazon Web Services. AWS is quite mature, but I'm not sure whether it would be expensive, because so far we just want to share a report with our remote clients, which doesn't look like a big enough deal to use AWS for.
Possible solution #3: Google Drive. I know Google Drive provides an API to do uploading and sharing programmatically, but I think we would need to register a dedicated Google account to use it.
Any better solutions??
You could possibly use AWS S3 and CloudFront. Files can easily be loaded into S3 using the AWS SDKs and API. You can then use the API to generate secure links to the files that can only be opened for a specific time and, optionally, from a specific IP.
Files on S3 can also be automatically cleaned up after a specific time if needed using lifecycle rules.
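For the cleanup part, a lifecycle rule is a one-time bucket setting. A hedged boto3 sketch, where the bucket name, prefix, and 7-day expiry are hypothetical:

import boto3

# One-time setup: expire every object under reports/ after 7 days.
s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-report-bucket",            # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-old-reports",
            "Filter": {"Prefix": "reports/"},
            "Status": "Enabled",
            "Expiration": {"Days": 7},
        }]
    },
)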
Storage and transfer prices are fairly cheap with AWS, and remember that the S3 storage cost indicated is per month, so if you only keep an object loaded for a few days then you only pay for those few days.
S3: http://aws.amazon.com/s3/pricing
Cloudfront: https://aws.amazon.com/cloudfront/pricing/
Here's a list of the SDKs for AWS:
https://aws.amazon.com/tools/#sdk
Or you can use their command line tools for Windows batch or PowerShell scripting:
https://aws.amazon.com/tools/#cli
Here's some info on how the private content URLs are created:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
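Putting the upload and the secure link together in Python (the asker's language): the sketch below uses plain S3 presigned URLs, which cover the time limit; the per-IP restriction mentioned above requires CloudFront signed URLs with a custom policy instead. Bucket, key, and expiry are hypothetical.

import boto3

s3 = boto3.client("s3")
# Upload the rendered report (names are hypothetical).
s3.upload_file("report.html", "my-report-bucket", "reports/client-a/report.html")

# Generate a link that stops working after one week.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-report-bucket",
            "Key": "reports/client-a/report.html"},
    ExpiresIn=7 * 24 * 3600,
)
print(url)   # include this URL in the summary e-mail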
I would suggest building this service using a mix of your #1 and #2 options: do the processing yourselves, and leverage AWS S3 for transferring the data, which is quite cheap.
For example, 100 GB costs approximately $3 per month.
S3 is also beneficial for disaster recovery: if anything happens to your local environment, your data will be safe in S3.
For security, you can leverage data encryption and signed URLs in AWS S3.
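For the encryption part, S3 can encrypt the object at rest if you ask for it at upload time; a hedged sketch with the same hypothetical names as above (the signed URL generation is unchanged):

import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "report.html",
    "my-report-bucket",
    "reports/client-a/report.html",
    ExtraArgs={"ServerSideEncryption": "AES256"},   # SSE-S3 encryption at rest
)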