I have integrated dcm4chee-2.18.3-mysql with X-ray, CT scan, and MRI machines. The .dcm images from the X-ray and CT scan machines are received in dcm4chee-2.18.3-mysql without problems, but I am running into issues when receiving images from the MRI. Here is the log when the MRI connects:
09:36:32,284 INFO [FsmImpl] received AAssociateRQ
appCtxName: 1.2.840.10008.3.1.1.1/DICOM Application Context Name
implClass: 1.3.46.670589.11.0.0.51.4.53.1
implVersion: Philips MR 53.1
calledAET: DCM4CHEE_SCP
callingAET: INGENIA
maxPDULen: 32768
asyncOpsWindow:
pc-1: as=1.2.840.10008.1.1/Verification SOP Class
ts=1.2.840.10008.1.2.4.70/JPEG Lossless, Non-Hierarchical, First-Order Prediction (Process 14 [Selection Value 1])
ts=1.2.840.10008.1.2.4.90/JPEG 2000 Lossless Image Compression
ts=1.2.840.10008.1.2.1/Explicit VR Little Endian
ts=1.2.840.10008.1.2.2/Explicit VR Big Endian
ts=1.2.840.10008.1.2/Implicit VR Little Endian
pc-3: as=1.2.840.10008.5.1.4.1.1.2/CT Image Storage
ts=1.2.840.10008.1.2.4.70/JPEG Lossless, Non-Hierarchical, First-Order Prediction (Process 14 [Selection Value 1])
ts=1.2.840.10008.1.2.4.90/JPEG 2000 Lossless Image Compression
ts=1.2.840.10008.1.2.1/Explicit VR Little Endian
ts=1.2.840.10008.1.2.2/Explicit VR Big Endian
ts=1.2.840.10008.1.2/Implicit VR Little Endian
pc-5: as=1.2.840.10008.5.1.4.1.1.4/MR Image Storage
ts=1.2.840.10008.1.2.4.70/JPEG Lossless, Non-Hierarchical, First-Order Prediction (Process 14 [Selection Value 1])
There are many common file formats, for example JPEG images.
Suppose a JPEG image exists on two systems, one big-endian and the other little-endian.
Will the saved JPEG files look different?
In other words, if each system holds the image in a contiguous area of memory starting from byte 0, will the bytes be the same?
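A JPEG file is defined as a byte stream whose multi-byte fields are big-endian by specification, so the bytes written to disk do not depend on the host's endianness. As a minimal illustration (the file name is a placeholder), reading the header byte-by-byte works identically on either kind of host:

#include <stdio.h>
#include <stdint.h>

/* Read the start of a JPEG file one byte at a time. The format fixes the
 * byte order (multi-byte fields are big-endian), so this code, and the
 * file itself, are the same on big- and little-endian machines. */
int main(void)
{
    FILE *f = fopen("photo.jpg", "rb");   /* placeholder file name */
    if (!f) { perror("fopen"); return 1; }

    int b0 = fgetc(f), b1 = fgetc(f);
    if (b0 == 0xFF && b1 == 0xD8)
        puts("SOI marker found (FF D8)");

    /* The next marker (e.g. APP0: FF E0) is followed by a big-endian length. */
    int m0 = fgetc(f), m1 = fgetc(f);
    int len_hi = fgetc(f), len_lo = fgetc(f);
    if (m0 == 0xFF && m1 != EOF && len_hi != EOF && len_lo != EOF) {
        /* Assemble the 16-bit length explicitly, most significant byte
         * first, instead of relying on the host's integer layout. */
        uint16_t seg_len = (uint16_t)((len_hi << 8) | len_lo);
        printf("marker FF %02X, segment length %u bytes\n", m1, seg_len);
    }

    fclose(f);
    return 0;
}

Any difference between two saved files would come from different encoder settings (quality, subsampling, metadata), not from the CPU's byte order.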
I have 300+ TIFF images ranging in size from 600 MB to 4 GB; these are medical images converted from the MRXS file format.
I want to downsize them and save copies at 512 x 445 pixels, as most of the files currently have five-figure dimensions (around 86K x 75K pixels). I need to downsize the files so that I can extract features from the images for a classification problem.
I am using an i5-3470 CPU on a Windows 10 Pro machine with 20 GB of RAM and a 4 TB external HDD that holds the files.
I've tried a couple of GUI and command-line applications such as XnConvert and Total Image Converter, but every GUI or command run causes the application to freeze.
Is downsizing such large files even feasible on my hardware, or do I need to try a different command/BAT approach?
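Feasibility depends less on the hardware than on whether the tool decodes the full raster before shrinking. As a sketch of one alternative (not tested on your data; file names are placeholders), a small C program using libvips' vips_thumbnail(), which shrinks while decoding instead of loading the whole 86K x 75K image into RAM:

#include <vips/vips.h>

/* Sketch: downsize one very large TIFF to roughly 512 x 445 with libvips.
 * vips_thumbnail() shrinks during decode, so the full-resolution raster
 * is never held in memory at once. File names are placeholders. */
int main(int argc, char **argv)
{
    VipsImage *out;

    if (VIPS_INIT(argv[0]))
        vips_error_exit(NULL);

    /* 512 is the target width; the "height" option caps the other side. */
    if (vips_thumbnail("slide_0001.tif", &out, 512, "height", 445, NULL))
        vips_error_exit(NULL);

    if (vips_image_write_to_file(out, "slide_0001_512x445.tif", NULL))
        vips_error_exit(NULL);

    g_object_unref(out);
    vips_shutdown();
    return 0;
}

Run in a loop over the 300+ files, only one shrunken image is resident at a time, so memory use should stay modest.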
I'm trying to do target recognition using the target's acoustic signal. I tested my code in MATLAB; now I'm trying to port it to C so I can test it in TinyOS using a sensor simulator.
In MATLAB I used WAV recordings (16 bits per sample, 44.1 kHz sample rate). For example, for a certain object, let's say a cat sound of about 0:01 duration, MATLAB gives me a total of 36864 samples of type int16, i.e. 73728 bytes.
On the sensor I have a Mica2 mote: a 10-bit ADC (though I'll use it as an 8-bit ADC), an 8 MHz microprocessor, and 4 KB of RAM. This means that when I detect an object, I'll fill the buffer with 4000 samples of type uint8_t (using an 8 kHz sample rate and the 8-bit ADC).
So my question is:
In MATLAB I used a large number of samples to represent the target audio signal (36864 samples), but on the sensor I'm limited to only 4000 samples. Would that be enough to record the whole target sound?
Thank you very much; I highly appreciate your advice.
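Just to make the numbers in the question concrete, here is a small C sketch of the same buffer arithmetic (values taken directly from the post):

#include <stdio.h>

/* Worked numbers from the question: how much signal the PC-side WAV
 * recording and the sensor-side buffer each hold. */
int main(void)
{
    /* PC side: 16-bit samples at 44.1 kHz. */
    double wav_rate  = 44100.0;   /* samples per second        */
    long   wav_count = 36864;     /* samples in the cat record */
    printf("WAV record:    %ld samples = %ld bytes = %.3f s\n",
           wav_count, wav_count * 2L, wav_count / wav_rate);

    /* Sensor side: 8-bit samples at 8 kHz, 4000-sample buffer. */
    double adc_rate  = 8000.0;
    long   buf_count = 4000;      /* uint8_t samples           */
    printf("Sensor buffer: %ld samples = %ld bytes = %.3f s\n",
           buf_count, buf_count, buf_count / adc_rate);

    return 0;
}

This prints roughly 0.836 s for the 36864-sample WAV and 0.500 s for the 4000-sample sensor buffer, so the buffer covers a bit more than half the duration of the MATLAB recording.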
Which option produces the smallest 9-bit depth grayscale images using LibPNG?
16-bit grayscale
8-bit grayscale with alpha, with the 9th bit stored as alpha
Any other suggestion?
Also, from the documentation it looks like in 8-bit GRAY_ALPHA the alpha channel is 8 bits as well. Is it possible to have 8 bits of gray with only one bit of alpha?
If all 256 possible gray levels are present (or are potentially present), you'll have to use 16-bit G8A8 pixels. But if at least one gray level is unused, you can use that spare level for transparency, using either 8-bit indexed pixels or grayscale plus a tRNS chunk to identify the transparent value.
Libpng doesn't provide a way of checking whether a spare level is available, so you have to do it in your application. ImageMagick, for example, does that for you:
$ pngcheck -v rgba32.png
File: rgba32.png (178 bytes)
chunk IHDR at offset 0x0000c, length 13
64 x 64 image, 32-bit RGB+alpha, non-interlaced
chunk IDAT at offset 0x00025, length 121
zlib: deflated, 32K window, maximum compression
chunk IEND at offset 0x000aa, length 0
$ magick rgba32.png im_optimized.png
$ pngcheck -v im_optimized.png
File: im_optimized.png (260 bytes)
chunk IHDR at offset 0x0000c, length 13
64 x 64 image, 8-bit grayscale, non-interlaced
chunk tRNS at offset 0x00025, length 2
gray = 0x00ff
chunk IDAT at offset 0x00033, length 189
zlib: deflated, 8K window, maximum compression
chunk IEND at offset 0x000fc, length 0
There is no G8A1 format defined in the PNG specification. But the alpha channel, being all 0's or 255's, compresses very well, so it's nothing to worry about. Note that in this test case (a simple white-to-black gradient), the 32-bit RGBA file is actually smaller than the "optimized" 8-bit grayscale+tRNS version.
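For completeness, a minimal libpng sketch of the grayscale-plus-tRNS approach described above, assuming the application has already verified that one gray level (0xFF here) never occurs in an opaque pixel (file name and image contents are placeholders):

#include <png.h>
#include <stdio.h>

#define W 64
#define H 64

/* Write 8-bit grayscale with a tRNS chunk marking one spare gray level
 * (0xFF) as fully transparent. The caller must have checked beforehand
 * that no opaque pixel uses 0xFF. */
int main(void)
{
    unsigned char row[W];
    FILE *fp = fopen("gray_trns.png", "wb");          /* placeholder */
    if (!fp) return 1;

    png_structp png = png_create_write_struct(PNG_LIBPNG_VER_STRING,
                                              NULL, NULL, NULL);
    png_infop info = png_create_info_struct(png);
    if (png == NULL || info == NULL || setjmp(png_jmpbuf(png)))
        return 1;

    png_init_io(png, fp);
    png_set_IHDR(png, info, W, H, 8, PNG_COLOR_TYPE_GRAY,
                 PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_DEFAULT,
                 PNG_FILTER_TYPE_DEFAULT);

    /* Declare gray level 0xFF as the single transparent value. */
    png_color_16 trans = { 0 };
    trans.gray = 0xFF;
    png_set_tRNS(png, info, NULL, 1, &trans);

    png_write_info(png, info);

    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++)
            row[x] = (unsigned char)((x * 254) / (W - 1));  /* 0..254 only */
        png_write_row(png, row);
    }

    png_write_end(png, info);
    png_destroy_write_struct(&png, &info);
    fclose(fp);
    return 0;
}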
Which option produces the smallest 9-bit depth grayscale images using LibPNG?
16-bit grayscale
8-bit grayscale with alpha, with the 9th bit stored as alpha
The raw byte layout of the two formats is very similar: G2-G1 G2-G1 ... in one case (most significant byte of each 16-bit value first), G-A G-A ... in the other. Because the filtering/prediction is done at the byte level, little or no difference is to be expected between the two alternatives. Because 16-bit grayscale is the more natural fit for your scenario, I'd opt for it.
If you go the other route, I'd suggest experimenting with putting either the most significant bit or the least significant bit in the alpha channel.
Also, from the documentation it looks like in 8-bit GRAY_ALPHA the alpha channel is 8 bits as well. Is it possible to have 8 bits of gray with only one bit of alpha?
No. But 1 bit of alpha would mean totally opaque/totally transparent, so you could instead add a tRNS chunk declaring one special gray value as totally transparent (as pointed out in the other answer, this means that value can no longer be used as an opaque color).
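And if you take the 16-bit grayscale route, the main detail to watch is that PNG stores 16-bit samples with the most significant byte first. A sketch that widens 9-bit values (0..511) into 16-bit rows, here by a simple left shift of 7 (the scaling choice is yours; names are placeholders):

#include <png.h>
#include <stdio.h>
#include <stdint.h>

#define W 64
#define H 64

/* Write 9-bit samples as 16-bit grayscale. PNG 16-bit samples are
 * big-endian in the file, and png_write_row() expects them that way in
 * memory (unless png_set_swap() is used), so the two bytes are split
 * explicitly below. */
int main(void)
{
    unsigned char row[W * 2];     /* 16-bit big-endian samples */

    FILE *fp = fopen("gray16.png", "wb");             /* placeholder */
    if (!fp) return 1;

    png_structp png = png_create_write_struct(PNG_LIBPNG_VER_STRING,
                                              NULL, NULL, NULL);
    png_infop info = png_create_info_struct(png);
    if (png == NULL || info == NULL || setjmp(png_jmpbuf(png)))
        return 1;

    png_init_io(png, fp);
    png_set_IHDR(png, info, W, H, 16, PNG_COLOR_TYPE_GRAY,
                 PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_DEFAULT,
                 PNG_FILTER_TYPE_DEFAULT);
    png_write_info(png, info);

    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            uint16_t v9 = (uint16_t)((x * 511) / (W - 1)); /* demo data    */
            uint16_t s  = (uint16_t)(v9 << 7);             /* 9 -> 16 bits */
            row[2 * x]     = (unsigned char)(s >> 8);      /* MSB first    */
            row[2 * x + 1] = (unsigned char)(s & 0xFF);
        }
        png_write_row(png, row);
    }

    png_write_end(png, info);
    png_destroy_write_struct(&png, &info);
    fclose(fp);
    return 0;
}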
I've implemented a PIC32 as a USB sound card using USB Audio Class 1. I'm sending a sawtooth signal from the microcontroller to the PC (Windows 7, 64-bit) as 16-bit samples:
in decimal:
000
800
1600
2400
... and so on.
Then I record the received audio using Audacity with the MME driver, as .wav or .raw.
I use MATLAB to open and inspect the data, and there I see values like:
000
799
1599
2400
..
The distortion varies from -1 to +1 bit per sample.
Does anyone have any idea where the problem might be?
Windows audio drivers?
Since you receive the audio signal on the PC, play it back, and record it in software, the audio signal is converted from digital to analog and back to digital again. This introduces quantization error and noise, which is why you see the small difference between the two signals.
I solved my problem.
The problem was caused by the application I used to record the data and the method I used. I used Audacity, which supports the old Windows MME audio API and the DirectSound API. These are relatively high-level APIs, apparently, and they were the cause of the distortion.
For background, see Microsoft's "About the Windows Core Audio APIs" documentation.
Instead I used another program, Reaper, which has an option to record using ASIO or WASAPI. This solved my problem. I've checked every sample in a 2-hour .wav file using MATLAB, and it is completely bit-perfect.
It was probably some quantization error, but it was caused by the API.
ASIO and WASAPI gave me bit-perfect sound; MME and DirectSound gave me a distorted signal.
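For anyone who wants to repeat that check without MATLAB, here is a small C sketch that scans a headerless 16-bit little-endian capture and counts steps that deviate from the expected sawtooth (the step of 800 per sample is taken from the question; the file name and modulo-65536 wrap are assumptions):

#include <stdio.h>
#include <stdint.h>

/* Check consecutive samples of a raw 16-bit little-endian capture
 * against the expected sawtooth step of 800 counts per sample.
 * File name, starting phase, and wrap-around are assumptions. */
int main(void)
{
    FILE *f = fopen("capture.raw", "rb");     /* placeholder file name */
    if (!f) { perror("fopen"); return 1; }

    unsigned char b[2];
    uint16_t prev = 0;
    int have_prev = 0;
    long n = 0, bad_steps = 0;

    while (fread(b, 1, 2, f) == 2) {
        uint16_t got = (uint16_t)(b[0] | (b[1] << 8));   /* little-endian   */
        if (have_prev) {
            uint16_t expect = (uint16_t)(prev + 800);    /* wraps mod 65536 */
            if (got != expect) {
                if (bad_steps < 10)   /* print only the first few */
                    printf("sample %ld: expected %u, got %u\n",
                           n, expect, got);
                bad_steps++;
            }
        }
        prev = got;
        have_prev = 1;
        n++;
    }

    printf("%ld samples read, %ld bad steps\n", n, bad_steps);
    fclose(f);
    return 0;
}

On the MME/DirectSound captures this should flag the ±1 deviations described in the question; on a WASAPI or ASIO capture it should report zero bad steps.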