Transfer PixelBox data with boost - sockets

I get an image from an Ogre render target.
I get the PixelBox of the image:
Ogre::RenderTarget *rt = _window;
rt->update();
int width = rt->getWidth();
int height = rt->getHeight();
std::cout << "width=" << width << std::endl;
std::cout << "height=" << height << std::endl;
uchar *data = new uchar[width * height * 3];
PixelBox pb(width, height, 1, PF_BYTE_RGB, data);
rt->copyContentsToMemory(pb);
After doing that, I want to take pb.data (which is Ogre::uchar data), write it into a buffer, and send it via a socket using Boost, but I don't see how to do it.
Thanks.

Look at the Boost.Asio HTTP sync client example. The code you want is going to look like this:
boost::asio::streambuf request;
std::ostream request_stream(&request);
request_stream.write(reinterpret_cast<const char*>(pb.data), width * height * 3);
boost::asio::write(socket, request);
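If the receiver also needs to know how many bytes to expect, a minimal sketch could send the image dimensions first and then the raw pixels. This is only an illustration; it assumes 'socket' is an already-connected boost::asio::ip::tcp::socket, and that width, height and the pixel buffer come from the question's code (PF_BYTE_RGB, so 3 bytes per pixel, and no endianness handling on the header):
#include <boost/asio.hpp>
#include <cstdint>

void sendPixelBox(boost::asio::ip::tcp::socket& socket,
                  std::uint32_t width, std::uint32_t height,
                  const unsigned char* pixels)
{
    // Fixed-size header so the receiver knows how much pixel data follows.
    std::uint32_t header[2] = { width, height };
    boost::asio::write(socket, boost::asio::buffer(header, sizeof(header)));
    // The raw RGB bytes themselves.
    boost::asio::write(socket, boost::asio::buffer(pixels, width * height * 3));
}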

Related

Highlight a widget partially in GTK+

I have a list box in part of my interface, and I want to highlight the GtkListBoxRows individually to show progress. Several files get loaded, my program works on each file individually, and I want to highlight the corresponding list box row like a progress bar. It is very similar to a progress bar, except that the contents inside are buttons and some text. Is there a specific Cairo/Pango function that allows the recoloring?
I have a solution here using Gtkmm (it should be easily translatable to C). I have a series of 5 buttons aligned horizontally inside a container and a "Make progress" button. When it is clicked, the child buttons in the container are updated to show the progress:
#include <iostream>
#include <gtkmm.h>
class MainWindow : public Gtk::ApplicationWindow
{
public:
MainWindow();
private:
Gtk::Grid m_container;
Gtk::Button m_progressButton;
int m_progressTracker = 0;
};
MainWindow::MainWindow()
: m_progressButton("Make progress...")
{
// Add five buttons to the container (horizontally):
for(int index = 0; index < 5; ++index)
{
Gtk::Button* button = Gtk::make_managed<Gtk::Button>("B" + std::to_string(index));
m_container.attach(*button, index, 0, 1, 1);
}
// Add a button to control progress:
m_container.attach(m_progressButton, 0, 1, 5, 1);
// Add handler to the progress button.
m_progressButton.signal_clicked().connect(
// Each time the button is clicked, the "highlighting" of the buttons
// in the container progresses until completed:
[this]()
{
Gtk::Widget* child = m_container.get_child_at(m_progressTracker, 0);
if(child != nullptr)
{
std::cout << "Making progress ..." << std::endl;
// Change the button's background color:
Glib::RefPtr<Gtk::CssProvider> cssProvider = Gtk::CssProvider::create();
cssProvider->load_from_data("button {background-image: image(cyan);}");
child->get_style_context()->add_provider(cssProvider, GTK_STYLE_PROVIDER_PRIORITY_USER);
// Update for next child...
++m_progressTracker;
}
}
);
// Make m_container a child of the window:
add(m_container);
}
int main(int argc, char *argv[])
{
std::cout << "Gtkmm version : " << gtk_get_major_version() << "."
<< gtk_get_minor_version() << "."
<< gtk_get_micro_version() << std::endl;
auto app = Gtk::Application::create(argc, argv, "org.gtkmm.examples.base");
MainWindow window;
window.show_all();
return app->run(window);
}
In your case, you will have to adapt the container and the signal (maybe you will need something else to trigger the redraw), but as far as changing the background color is concerned, it should work pretty much the same. You can build this (using Gtkmm 3.24) with:
g++ main.cpp -o example.out `pkg-config --cflags --libs gtkmm-3.0`
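Since the question is specifically about GtkListBoxRows, here is a hedged sketch of how the same CSS-provider trick could be applied to a row instead of a button (m_listBox and the row index are assumptions standing in for your own widgets; "row" is the CSS node name of GtkListBoxRow):
Gtk::ListBoxRow* row = m_listBox.get_row_at_index(m_progressTracker);
if(row != nullptr)
{
    // The provider only affects the widget it is added to, so only this row is recolored.
    Glib::RefPtr<Gtk::CssProvider> cssProvider = Gtk::CssProvider::create();
    cssProvider->load_from_data("row {background-image: image(cyan);}");
    row->get_style_context()->add_provider(cssProvider, GTK_STYLE_PROVIDER_PRIORITY_USER);
    ++m_progressTracker;
}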

arduino not writing to sd card

I have an Arduino with a Seeed Studio SD card shield v4.0, a prototype shield above that, and on that a TMP36 temperature sensor plus a red and two green LEDs: the red one shows that it is "Ready" to log data, the first green one shows that it is currently "logging data", and the last LED shows that the data was "Saved" to the SD card, which it doesn't. At the beginning of the file, however, it creates the line "Testing 1, 2, 3..." in a txt file called TEST. In that same file there should be the data, but there is no data: it will write to the card in setup, but not in loop. Can anyone help me?
Code:
#include <toneAC.h>
#include <SPI.h>
#include <SD.h>
int readyLED = 2;
int startLED = 8;
int buzzer = 7;
int tempSensor = A0;
int readyButton = 5;
int sampleNo = 0;
int button_mode = 1;
int saveLED = 4;
File myFile;
void setup() {
// put your setup code here, to run once:
pinMode(readyLED, OUTPUT);
digitalWrite(readyLED, HIGH);
pinMode(saveLED, OUTPUT);
digitalWrite(saveLED, LOW);
pinMode(startLED, OUTPUT);
pinMode(buzzer, OUTPUT);
pinMode(10, OUTPUT);
pinMode(tempSensor, INPUT);
pinMode(readyButton, INPUT);
digitalWrite(readyLED, HIGH);
digitalWrite(startLED, LOW);
Serial.begin(9600);
while (!Serial){}
Serial.println("Initializing SD card...");
if(!SD.begin(4)){
Serial.println("Failed!");
return;
}
Serial.println("Success!");
myFile = SD.open("test.txt", FILE_WRITE);
if (myFile) {
Serial.println("Writing to test.txt...");
myFile.println("testing 1, 2, 3.");
delay(500);
myFile.close();
Serial.println("done.");
} else {
// if the file didn't open, print an error:
Serial.println("error opening test.txt");
}
}
void loop() {
// put your main code here, to run repeatedly:
digitalWrite(readyLED, HIGH);
digitalWrite(startLED, LOW);
delay(700);
digitalWrite(startLED, HIGH);
delay(650);
int reading = analogRead(tempSensor);
float voltage = reading * 5.0;
voltage /= 1024.0;
float temperatureC = (voltage - 0.5) * 100;
float temperatureF = (temperatureC * 9.0 / 5.0) + 32.0;
Serial.print("Sample No. ");
sampleNo = sampleNo + 1;
Serial.print(sampleNo);
Serial.print(" Temperature: ");
Serial.print(temperatureF);
Serial.println(" F");
myFile = SD.open("test.txt", FILE_WRITE);
if(myFile){
Serial.println("Test.txt");
}
while(myFile.available()){
myFile.print("Sample No. ");
myFile.print(sampleNo);
myFile.print(" Temperature: ");
myFile.print(temperatureF);
myFile.println(" F");
}
delay(30);
digitalWrite(saveLED, HIGH);
delay(10);
digitalWrite(saveLED, LOW);
delay(10);
myFile.close();
}
You may want to check whether your while loop is actually being run. Since you know you can write to the SD card from setup(), you know the code inside the while loop works, but is the while loop actually entered, or is its condition evaluating to false so that it is skipped?
Have you considered the time it takes to write the data as an issue? You may be asking it to write data before the Arduino has had time to process it.
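One possible explanation, offered here as an assumption rather than something stated above: FILE_WRITE opens the file positioned at its end, so myFile.available() (which reports bytes left to read) returns 0 and the while body never runs. If that turns out to be the case, a minimal sketch of the logging block without the check would be:
// Sketch only: same logging code, but written unconditionally instead of
// inside while(myFile.available()), which is 0 for a file opened at its end.
myFile = SD.open("test.txt", FILE_WRITE);
if (myFile) {
  myFile.print("Sample No. ");
  myFile.print(sampleNo);
  myFile.print(" Temperature: ");
  myFile.print(temperatureF);
  myFile.println(" F");
  myFile.close();   // close() flushes the data to the card
} else {
  Serial.println("error opening test.txt");
}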

using opencv features2d on iPhone

I'm trying to use feature detection via OpenCV on iOS and I'm running into a conundrum:
features2d relies on highgui
highgui can't be built for iOS (or at least I can't figure out how).
This leads me to believe that features2d just can't be used on iOS without rewriting the module to remove the calls to cvSaveImage() and cvLoadImage(). Is this wrong? Has anyone run into this and solved it?
You are taking the wrong approach; you don't need highgui, since that library is only meant to make it easier for you to handle the results of your processing. You can simply do those steps manually.
For example, consider this HOG example:
#include <iostream>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
int
main(int argc, char *argv[])
{
const char *imagename = argc > 1 ? argv[1] : "../../image/pedestrian.png";
cv::Mat img = cv::imread(imagename, 1);
if(img.empty()) return -1;
cv::HOGDescriptor hog;
hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
std::vector<cv::Rect> found;
// image, detected rectangles, hit threshold (distance to the SVM hyperplane),
// window stride (a multiple of the block stride),
// padding so that objects partly outside the image can still be found,
// scale factor of the detection window, grouping threshold
hog.detectMultiScale(img, found, 0.2, cv::Size(8,8), cv::Size(16,16), 1.05, 2);
std::vector<cv::Rect>::const_iterator it = found.begin();
std::cout << "found:" << found.size() << std::endl;
for(; it!=found.end(); ++it) {
cv::Rect r = *it;
// shrink the detected rectangle a little before drawing it
r.x += cvRound(r.width*0.1);
r.width = cvRound(r.width*0.8);
r.y += cvRound(r.height*0.07);
r.height = cvRound(r.height*0.8);
cv::rectangle(img, r.tl(), r.br(), cv::Scalar(0,255,0), 3);
}
// draw the result
cv::namedWindow("result", CV_WINDOW_AUTOSIZE|CV_WINDOW_FREERATIO);
cv::imshow( "result", img );
cv::waitKey(0);
}
It is made for a non-iOS environment, but you can simply replace all the highgui calls with native iOS code.
You can get a very good image-handling library for OpenCV on iOS from here:
http://aptogo.co.uk/2011/09/opencv-framework-for-ios/
so what you should really care about in that code is just this part:
cv::HOGDescriptor hog;
hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
std::vector<cv::Rect> found;
// image, detected rectangles, hit threshold (distance to the SVM hyperplane),
// window stride (a multiple of the block stride),
// padding so that objects partly outside the image can still be found,
// scale factor of the detection window, grouping threshold
hog.detectMultiScale(img, found, 0.2, cv::Size(8,8), cv::Size(16,16), 1.05, 2);
std::vector<cv::Rect>::const_iterator it = found.begin();
std::cout << "found:" << found.size() << std::endl;
for(; it!=found.end(); ++it) {
cv::Rect r = *it;
// shrink the detected rectangle a little before drawing it
r.x += cvRound(r.width*0.1);
r.width = cvRound(r.width*0.8);
r.y += cvRound(r.height*0.07);
r.height = cvRound(r.height*0.8);
cv::rectangle(img, r.tl(), r.br(), cv::Scalar(0,255,0), 3);
}
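As a hedged illustration of what "replacing the highgui calls" can mean in practice: if you already have a raw RGBA pixel buffer (for example one you extracted from a CGImage yourself), you can wrap it in a cv::Mat without touching highgui. The function below is an assumption-laden sketch, not part of the original answer:
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Wrap an RGBA buffer in a cv::Mat (no copy), then convert it to the BGR
// layout that detectMultiScale expects. cvtColor lives in imgproc, not highgui.
cv::Mat matFromRGBA(unsigned char* pixels, int width, int height)
{
    cv::Mat rgba(height, width, CV_8UC4, pixels);
    cv::Mat bgr;
    cv::cvtColor(rgba, bgr, CV_RGBA2BGR);
    return bgr;
}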
For a BRIEF descriptor:
// You get your img into a cv mat from the uiimage or whatever.
cv::Mat gray_img;
cv::cvtColor(img, gray_img, CV_BGR2GRAY);
cv::normalize(gray_img, gray_img, 0, 255, cv::NORM_MINMAX);
std::vector<cv::KeyPoint> keypoints;
std::vector<cv::KeyPoint>::iterator itk;
cv::Mat descriptors;
//
// threshold=0.05, edgeThreshold=10.0
cv::SiftFeatureDetector detector(0.05,10.0);
detector.detect(gray_img, keypoints);
// BRIEF-based descriptor extractor
cv::BriefDescriptorExtractor extractor;
cv::Scalar color(50,50,155);
extractor.compute(gray_img, keypoints, descriptors);
// 32-dimensional descriptor x number of keypoints
for(int i=0; i<descriptors.rows; ++i) {
cv::Mat d(descriptors, cv::Rect(0,i,descriptors.cols,1));
std::cout << i << ": " << d << std::endl;
}
And you have your result.

ClearCanvas DICOM - How to create a Tag with a 'VR' of 'OW'

OK, so what I am doing is adding a new overlay to an existing DICOM file and saving it (the DICOM file now has two overlays). Everything saves without errors and both DICOM viewers, Sante and ClearCanvas Workstation, open the file, but only Sante displays both overlays.
Now, when I look at the tags within the DICOM file, the OverlayData (6000) 'VR' is 'OW' and the OverlayData (6002) 'VR' is 'OB'.
So my problem is how to create a new tag with a 'VR' of 'OW', because that is the correct one to use for OverlayData.
Here is the code I'm using to add the new overlay to the DicomFile.DataSet:
NOTE: after I create the overlay I do write visible pixel data into it.
void AddOverlay()
{
int newOverlayIndex = 0;
for(int i = 0; i != 16; ++i)
{
if(!DicomFile.DataSet.Contains(GetOverlayTag(i, 0x3000)))
{
newOverlayIndex = i;
break;
}
}
//Columns
uint columnsTag = GetOverlayTag(newOverlayIndex, 0x0011);
DicomFile.DataSet[columnsTag].SetUInt16(0, (ushort)CurrentData.Width);
//Rows
uint rowTag = GetOverlayTag(newOverlayIndex, 0x0010);
DicomFile.DataSet[rowTag].SetUInt16(0, (ushort)CurrentData.Height);
//Type
uint typeTag = GetOverlayTag(newOverlayIndex, 0x0040);
DicomFile.DataSet[typeTag].SetString(0, "G");
//Origin
uint originTag = GetOverlayTag(newOverlayIndex, 0x0050);
DicomFile.DataSet[originTag].SetUInt16(0, 1);
DicomFile.DataSet[originTag].SetUInt16(1, 1);
//Bits Allocated
uint bitsAllocatedTag = GetOverlayTag(newOverlayIndex, 0x0100);
DicomFile.DataSet[bitsAllocatedTag].SetUInt16(0, 1);
//Bit Position
uint bitPositionTag = GetOverlayTag(newOverlayIndex, 0x0100);
DicomFile.DataSet[bitPositionTag].SetUInt16(0, 0);
//Data
uint dataTag = GetOverlayTag(newOverlayIndex, 0x3000);
DicomFile.DataSet[dataTag].SetNullValue();//<<< Needs to be something else
byte[] bits = new byte[(CurrentData.Width*CurrentData.Height)/8];
for(int i = 0; i != bits.Length; ++i) bits[i] = 0;
DicomFile.DataSet[dataTag].Values = bits;
}
public static uint GetOverlayTag(int overlayIndex, short element)
{
short group = (short)(0x6000 + (overlayIndex*2));
byte[] groupBits = BitConverter.GetBytes(group);
byte[] elementBtis = BitConverter.GetBytes(element);
return BitConverter.ToUInt32(new byte[]{elementBtis[0], elementBtis[1], groupBits[0], groupBits[1]}, 0);
}
So it would seem to me there should be some method, like 'DicomFile.DataSet[dataTag].SetNullValue();', to create the tag with a 'VR' of 'OW'. Or maybe there's a totally different way to add an overlay in ClearCanvas, I don't know...
OK, my confusion was actually caused by a bug in my program.
I was trying to create the "Bit Position" tag using element 0x0100 instead of 0x0102.
OW vs. OB is irrelevant.
Sorry about that...

Get size of image without loading in to memory

I have several .png images (ETA: but the format could also be JPEG or something else) that I am going to display in UITableViewCells. Right now, in order to get the row heights, I load in the images, get their size properties, and use that to figure out how high to make the rows (calculating any necessary changes along the way, since most of the images get resized before being displayed). In order to speed things up and reduce memory usage, I'd like to be able to get size without loading the images. Is there a way to do this?
Note: I know that there are a number of shortcuts I could implement to eliminate this issue, but for several reasons I can't resize images in advance or collect the image sizes in advance, forcing me to get this info at run time.
It should be pretty simple. The PNG spec explains the PNG datastream (which is effectively the file); the IHDR chunk contains the image dimensions.
So what you have to do is read the PNG "magic value" (signature) and then read two four-byte integers, which will be the width and height, respectively. These integers are stored in network (big-endian) byte order, so you may need to reorder the bytes on a little-endian machine, but once you figure that out, it is very simple.
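For what it's worth, here is a minimal C++ sketch of that approach (it assumes a well-formed PNG and does only basic error checking):
#include <cstdint>
#include <cstdio>
#include <cstring>

// Read the 8-byte PNG signature plus the start of the IHDR chunk; the width and
// height are stored big-endian at byte offsets 16 and 20 of the file.
bool pngSize(const char* path, std::uint32_t& width, std::uint32_t& height)
{
    static const unsigned char signature[8] = {137, 80, 78, 71, 13, 10, 26, 10};
    unsigned char buf[24];   // signature + chunk length/type + first 8 bytes of IHDR
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    bool ok = std::fread(buf, 1, sizeof(buf), f) == sizeof(buf)
              && std::memcmp(buf, signature, 8) == 0;
    std::fclose(f);
    if (!ok) return false;
    width  = (std::uint32_t(buf[16]) << 24) | (std::uint32_t(buf[17]) << 16)
           | (std::uint32_t(buf[18]) << 8)  |  std::uint32_t(buf[19]);
    height = (std::uint32_t(buf[20]) << 24) | (std::uint32_t(buf[21]) << 16)
           | (std::uint32_t(buf[22]) << 8)  |  std::uint32_t(buf[23]);
    return true;
}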
As of iOS SDK 4.0, this task can be accomplished with the ImageIO framework (CGImageSource...). I have answered a similar question here.
imageUrl is an NSURL; also #import <ImageIO/ImageIO.h> (with angle brackets).
CGImageSourceRef imageSourceRef = CGImageSourceCreateWithURL((CFURLRef)imageUrl, NULL);
if (!imageSourceRef)
return;
CFDictionaryRef props = CGImageSourceCopyPropertiesAtIndex(imageSourceRef, 0, NULL);
CFRelease(imageSourceRef);
NSDictionary *properties = (NSDictionary*)CFBridgingRelease(props);
if (!properties) {
return;
}
NSNumber *heightNumber = [properties objectForKey:@"PixelHeight"];
NSNumber *widthNumber = [properties objectForKey:@"PixelWidth"];
int height = 0;
int width = 0;
if (heightNumber) {
height = [heightNumber intValue];
}
if (widthNumber) {
width = [widthNumber intValue];
}
Note: This function doesn't work with iPhone-optimized (compressed) PNGs; that compression is performed automatically by Xcode and changes the image header. See more details, and how to disable this feature, here: http://discussions.apple.com/thread.jspa?threadID=1751896
Future versions of PSFramework will interpret these headers too, stay tuned.
See the function below; it does just that. It reads only 30 bytes of the PNG file and returns the size (CGSize). This function is part of a framework for processing images called PSFramework (http://sourceforge.net/projects/photoshopframew/). It is not yet implemented for other image formats; developers are welcome. The project is open source under the GNU license.
CGSize PSPNGSizeFromMetaData( NSString* anFileName ) {
// File Name from Bundle Path.
NSString *fullFileName = [NSString stringWithFormat:@"%@/%@", [[NSBundle mainBundle] bundlePath], anFileName ];
// File Name to C String.
const char* fileName = [fullFileName UTF8String];
/* source file */
FILE * infile;
// Check if we can open the file.
if ((infile = fopen(fileName, "rb")) == NULL)
{
NSLog(@"PSFramework Warning >> (PSPNGSizeFromMetaData) can't open the file: %@", anFileName );
return CGSizeZero;
}
////// ////// ////// ////// ////// ////// ////// ////// ////// ////// //////
// Length of Buffer.
#define bytesLength 30
// Bytes Buffer.
unsigned char buffer[bytesLength];
// Grab Only First Bytes.
fread(buffer, 1, bytesLength, infile);
// Close File.
fclose(infile);
////// ////// ////// ////// //////
// PNG Signature.
unsigned char png_signature[8] = {137, 80, 78, 71, 13, 10, 26, 10};
// Compare File signature.
if ((int)(memcmp(&buffer[0], &png_signature[0], 8))) {
NSLog(@"PSFramework Warning >> (PSPNGSizeFromMetaData) : The file (%@) is not a PNG file.", anFileName);
return CGSizeZero;
}
////// ////// ////// ////// ////// ////// ////// ////// ////// //////
// Calc Sizes. Isolate only four bytes of each size (width, height).
int width[4];
int height[4];
for ( int d = 16; d < ( 16 + 4 ); d++ ) {
width[ d-16] = buffer[ d ];
height[d-16] = buffer[ d + 4];
}
// Convert bytes to Long (Integer)
long resultWidth = (width[0] << (int)24) | (width[1] << (int)16) | (width[2] << (int)8) | width[3];
long resultHeight = (height[0] << (int)24) | (height[1] << (int)16) | (height[2] << (int)8) | height[3];
// Return Size.
return CGSizeMake( resultWidth, resultHeight );
}
Here's a quick & dirty port to C#:
public static Size PNGSize(string fileName)
{
// PNG Signature.
byte[] png_signature = {137, 80, 78, 71, 13, 10, 26, 10};
try
{
using (FileStream stream = File.OpenRead(fileName))
{
byte[] buf = new byte[30];
if (stream.Read(buf, 0, 30) == 30)
{
int i = 0;
int imax = png_signature.Length;
for (i = 0; i < imax; i++)
{
if (buf[i] != png_signature[i])
break;
}
// passes sig test
if (i == imax)
{
// Calc Sizes. Isolate only four bytes of each size (width, height).
// Convert bytes to integer
int resultWidth = buf[16] << 24 | buf[17] << 16 | buf[18] << 8 | buf[19];
int resultHeight = buf[20] << 24 | buf[21] << 16 | buf[22] << 8 | buf[23];
// Return Size.
return new Size( resultWidth, resultHeight );
}
}
}
}
catch
{
}
return new Size(0, 0);
}
This is nicely implemented in Perl's Image::Size module for about a dozen formats, including PNG and JPEG. In order to re-implement it in Objective-C, just take the Perl code and read it as pseudocode ;-)
For instance, pngsize() is defined as
# pngsize : gets the width & height (in pixels) of a png file
# cor this program is on the cutting edge of technology! (pity it's blunt!)
#
# Re-written and tested by tmetro@vl.com
sub pngsize
{
my $stream = shift;
my ($x, $y, $id) = (undef, undef, "could not determine PNG size");
my ($offset, $length);
# Offset to first Chunk Type code = 8-byte ident + 4-byte chunk length + 1
$offset = 12; $length = 4;
if (&$read_in($stream, $length, $offset) eq 'IHDR')
{
# IHDR = Image Header
$length = 8;
($x, $y) = unpack("NN", &$read_in($stream, $length));
$id = 'PNG';
}
($x, $y, $id);
}
jpegsize is only a few lines longer.
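For completeness, here is a hedged C++ sketch of the JPEG case, following the same marker-walking idea (treat it as an illustration rather than a battle-tested parser): the dimensions live in the SOFn frame header, right after the sample-precision byte.
#include <cstdio>

// Read two bytes as a big-endian integer.
static int readBE16(std::FILE* f)
{
    int hi = std::fgetc(f);
    int lo = std::fgetc(f);
    return (hi << 8) | lo;
}

bool jpegSize(const char* path, int& width, int& height)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    bool found = false;
    if (std::fgetc(f) == 0xFF && std::fgetc(f) == 0xD8) {              // SOI marker
        for (;;) {
            if (std::fgetc(f) != 0xFF) break;                          // segments start with 0xFF
            int marker = std::fgetc(f);
            while (marker == 0xFF) marker = std::fgetc(f);             // skip fill bytes
            if (marker == EOF || marker == 0xD9 || marker == 0xDA) break;  // EOI or SOS: give up
            int length = readBE16(f);                                  // includes these two bytes
            bool isSOF = marker >= 0xC0 && marker <= 0xCF &&
                         marker != 0xC4 && marker != 0xC8 && marker != 0xCC;
            if (isSOF) {
                std::fgetc(f);                                         // sample precision
                height = readBE16(f);
                width = readBE16(f);
                found = true;
                break;
            }
            std::fseek(f, length - 2, SEEK_CUR);                       // skip the segment body
        }
    }
    std::fclose(f);
    return found;
}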
Try using the CGImageCreateWithPNGDataProvider and CGImageCreateWithJPEGDataProvider functions. I don't know whether they're lazy enough or not, or whether that's even possible for JPEG, but it's worth trying.
Low-tech solutions:
If you know what the images are beforehand, store the image sizes along with their filenames in an XML file or plist (or whichever way you prefer) and just read those properties in.
If you don't know what the images are (i.e. they're going to be defined at runtime), then you must have had the images loaded at one time or another. The first time you do have them loaded, save their height and width in a file so you can access them later.