Send IR signals from an iPhone

I would like to develop an app to control some IR receivers from my iPhone.
I used an Arduino to capture the raw timing values of some commands from a remote, and now I have something like this:
ON
1250, 450, 1200, 450, 350, 1300, 1250, 450, 1200, 450, 400, 1300, 350, 1300, 350, 1300, 400, 1300, 350, 1300, 1200, 450, 350
OFF
1150, 550, 1150, 500, 350, 1300, 1150, 550, 1150, 500, 300, 1350, 350, 1300, 400, 1300, 350, 1300, 350, 1350, 350, 1300, 1200
I have built a transmitter based on a 3.5 mm jack, so now I would like to send these values through the iPhone's headphone jack.
How can I do it? Is there some library or framework in Swift or Objective-C that can help me?
In the App Store there is an app called "TV Remote" that does effectively what I want, and it works very well with my Samsung TV, but it has its own database of values and only covers some TVs. So now I would like to develop an app like that to control my electric fan, my LED strip, or other devices.
Can you give me some advice please?

You can read any data through the iPhone's headphone jack, as long as the bandwidth of the signal fits within the bandwidth of the iPhone's A/D converter, which is about 20 Hz to 20 kHz.
Also look at this project: https://code.google.com/p/hijack-main/
Basically, it describes harvesting power and bandwidth from the mobile phone's audio interface to create a cubic-inch peripheral sensor ecosystem for the mobile phone.
The incoming data must be modulated at frequencies within the passband of the iPhone microphone input. Although many have suggested that this limits the data rate, in fact 19 kHz audio is a very wideband signal, capable of dozens of kilobits per second.
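As a rough, language-agnostic sketch (Python here; the 44.1 kHz sample rate, the 19 kHz carrier, and the function name are my assumptions, not values from the question), the mark/space timing lists above could be rendered into an audio buffer like this:

```python
import math

SAMPLE_RATE = 44100   # assumed audio output rate (Hz)
CARRIER_HZ = 19000    # assumed tone, near the top of the jack's passband

def timings_to_samples(timings_us, sample_rate=SAMPLE_RATE, carrier_hz=CARRIER_HZ):
    """Turn alternating mark/space durations (microseconds) into samples.

    Even-indexed entries are "marks" (tone on), odd-indexed entries are
    "spaces" (silence), matching the raw timing dumps above.
    """
    samples = []
    t = 0.0
    for i, dur_us in enumerate(timings_us):
        n = round(dur_us * 1e-6 * sample_rate)
        for _ in range(n):
            if i % 2 == 0:
                samples.append(math.sin(2 * math.pi * carrier_hz * t))  # mark
            else:
                samples.append(0.0)  # space
            t += 1.0 / sample_rate
    return samples
```

A real implementation would feed these samples to the audio output on iOS; since the jack's passband tops out around 20 kHz, the external circuit on the 3.5 mm plug would still have to regenerate the actual IR carrier (typically around 38 kHz) from the audio envelope.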
There's also the following library: http://www.crudebyte.com/jack-ios/sdk/
Sources: iPhone headphone jack - read in data?


Flutter: compressing images results in bad quality

I am using flutter_image_compress to compress images. I wrote a method that takes a fileSize and tries to reduce the image down to that fileSize with best quality. Here is the method:
Future<File?> compressImage(File image, {int kb = 50}) async {
  var inPath = image.absolute.path;
  // outPath was undefined in the original snippet; it must be a writable
  // target path different from inPath for compressAndGetFile to work.
  var outPath = '${inPath}_compressed.jpg';
  int q = 95;
  File? result = image;
  while (q >= 1 && result != null && (await result.length() > kb * 1024)) {
    result = await FlutterImageCompress.compressAndGetFile(
      inPath,
      outPath,
      quality: q,
      minHeight: 1500,
      minWidth: 1500,
    );
    q = q ~/ 2;
  }
  return result;
}
So basically this method checks whether the image is already smaller than the given size and, if not, compresses it down. On each iteration the quality is halved (95 → 47 → 23 → …), until it reaches 1.
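For illustration, the quality-halving loop can be sketched language-agnostically (Python here; `compress` is a hypothetical stand-in for FlutterImageCompress.compressAndGetFile that just returns the resulting size in bytes at a given quality):

```python
def compress_to_target(original_size, target_kb, compress):
    """Sketch of the quality-halving loop from the question.

    `compress(quality)` is a hypothetical stand-in for the real
    compression call; it returns the size in bytes of the result
    at that quality.
    """
    q = 95
    size = original_size
    while q >= 1 and size > target_kb * 1024:
        size = compress(q)
        q //= 2  # 95 -> 47 -> 23 -> 11 -> 5 -> 2 -> 1 -> 0 (loop ends)
    return size, q
```

Note that this halving schedule probes only seven quality levels, so the final size can undershoot the target by a lot, which is part of why the results differ so much between runs.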
This method works, but I get very different image qualities for the same sizes on my iPhone and Android Phone.
On my iPhone (iPhone 11, iOS 15.5), I can compress the image down to 50 kB and the quality is perfect; I don't see any difference. On my Android phone (Samsung Galaxy A13, Android 12), if I compress the image down to 50 kB, the quality is pretty bad.
Android Example:
initial length: 1213154
compress quality: 47
compressed length: 89085
iOS Example:
initial length: 1409313
compress quality: 2
compressed length: 76801
So, in the iOS example, I had to use a quality of 2 to get it down to about 76 kB, and the result is perfect. On Android, I used a quality of 47 to get it down to 89 kB, and the quality is pretty bad.
Is there a method by which I can reduce the size as much as possible without really losing image quality? Are there any other factors I missed? This doesn't really make sense to me, because the image on iOS looks way better even though it is compressed to a much smaller file size.
The package seems to provide native code written in Kotlin (for Android) and Swift (for iOS). So there may be differences between those two implementations, or even a bug in the Android code leading to worse quality.
You could file an issue on their GitHub page.

Positioning widgets on top of Network Image based on X and Y coordinates

I want to position custom widgets on top of an image based on X and Y coordinates. Think of it as an overlay. Until now, I have tried a solution, where I used a Stack in a combination with Positioned, to position widgets above the image. The problem arises when I try this solution on different screen sizes. The overlaid widgets are off, depending on the screen size I'm testing on.
Here's my current implementation:
Expanded(
  child: InteractiveViewer(
    constrained: false,
    minScale: 0.1,
    maxScale: 2.0,
    child: Stack(
      children: [
        Image.network(widget.plan.image),
        Positioned(
          bottom: 2927,
          left: 6700,
          child: SvgPicture.asset("assets/svg/pin.svg", height: 200),
        ),
      ],
    ),
  ),
)
Note that I'm also wrapping everything in InteractiveViewer because the Image I'm getting from the backend is very large.
EDIT: I have noticed that for some reason the image dimensions differ between displays. For example, the photo dimensions on an iPhone X are 10224 × 6526, whereas on an iPhone 13 Pro Max they are 8192 × 5228. I am now investigating why this happens, as it is probably the reason why the custom widgets drawn on top are shifted on different screens.
EDIT 2: After long research I've finally come across something. I own two physical devices, an iPhone 12 and an iPhone X. I was testing on the simulator and something really odd happened: the simulator logs different image dimensions than the physical device it simulates. Let me explain:
Original Image dimension coming from backend:
10224 × 6526
iPhone 12 simulator image dimension log after network call:
8192 × 5228
iPhone 12 PHYSICAL device image dimension log after network call:
10224 × 6526
iPhone X PHYSICAL device image dimension log after network call:
10224 × 6526
This effectively means that something works differently in the image scaling between the iOS simulator and a physical device.
The best way to do this is to position your items relative to the screen's width and height, as percentages:
bottom: MediaQuery.of(context).size.height * 0.5 // 50% of the screen's height
left: MediaQuery.of(context).size.width * 0.3 // 30% of the screen's width
You are free to change the percentages to suit you.
EDIT 2
To convert your absolute pixel coordinates into those percentages, divide by the photo's dimensions:
x = (6700 / width-of-photo) * 100
y = (2927 / height-of-photo) * 100
As for the issue concerning the size of the image, you might want to consider placing it inside a widget and giving it max-height and max-width values.
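The coordinate conversion can be sketched language-agnostically (Python here; the helper names are mine, not from any package): store the pin position as fractions of the source image, then scale to whatever size the image is actually shown at.

```python
def to_fraction(px, py, image_w, image_h):
    """Convert absolute pixel coordinates on the source image into
    fractions of the image size, so the overlay survives resizing."""
    return px / image_w, py / image_h

def to_screen(fx, fy, shown_w, shown_h):
    """Map the fractions back onto whatever size the image is shown at."""
    return fx * shown_w, fy * shown_h
```

With the pin at (6700, 2927) on the 10224 × 6526 original, the same fractions land it at the equivalent spot on the 8192 × 5228 version the simulator delivers.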

Flutter MediaQuery.of(context).size.width values are different than real screen resolution

In my Flutter application I am trying to get the real screen width (which can naturally differ between devices).
I am using MediaQuery.of(context).size.width, but I've noticed that the values returned do not match the real screen resolution.
For instance,
On an iPhone 11 Pro Max simulator (resolution 2688 × 1242) I get MediaQuery.of(context).size.width = 414
On a Nexus XL emulator (resolution 1440 × 2560) I get MediaQuery.of(context).size.width = 411.42857142857144
On a real iPhone 7 (resolution 1334 × 750) I get MediaQuery.of(context).size.width = 375
Does anyone know why the value returned by MediaQuery differs from the real screen resolution in pixels?
Thanks
According to the size property's documentation:
The size of the media in logical pixels (e.g, the size of the screen).
Logical pixels are roughly the same visual size across devices.
Physical pixels are the size of the actual hardware pixels on the
device. The number of physical pixels per logical pixel is described
by the devicePixelRatio.
So you would do MediaQuery.of(context).size.width * MediaQuery.of(context).devicePixelRatio to get the width in physical pixels.
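The arithmetic checks out against the numbers in the question (a Python sketch; the devicePixelRatio values below follow directly from the question's own resolution figures):

```python
def physical_width(logical_width, device_pixel_ratio):
    """Logical pixels times devicePixelRatio gives hardware pixels."""
    return logical_width * device_pixel_ratio

# Values taken from the question; the ratios follow from them:
# iPhone 11 Pro Max: 414 * 3.0 = 1242 (matches 2688 x 1242)
# iPhone 7:          375 * 2.0 = 750  (matches 1334 x 750)
# Nexus XL:          411.42857142857144 * 3.5 = 1440 (matches 1440 x 2560)
```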

PDF: keep the TrimBox after page rotation

I've come across the challenge of creating a TrimBox in a PDF with portrait orientation and then rotating it to landscape orientation, which I did:
pageDict.put(PdfName.ROTATE, new PdfNumber(rot + 270));
But my TrimBox goes bananas. When I define it again with the same coordinates after applying the rotation, the TrimBox has the same dimensions (perimeter) but isn't in the same place.
I've tried to define the TrimBox by calculating the new coordinates, but the results don't make much sense: the calculations are (supposedly) correct, yet the TrimBox doesn't appear in the place that was defined in the portrait orientation.
Say I want to define a TrimBox in portrait orientation with a picture inside it; when I rotate the page, the TrimBox should continue to enclose the same picture. How can this be done?
Anyone have any idea?
Cheers
UPDATED:
This is the original TrimBox that was set:
This is the TrimBox after doing the rotation of the PDF page:
What I want is to keep the same TrimBox in the same place, independent of whether the page is rotated or not.
Cheers
UPDATED:
The rotation of the PDF is in the original POST.
The TrimBox is defined by BIRT, with an API changed by us that defines the boxes; one of them is the TrimBox, and it is set like this:
PdfArray array = (PdfArray) pageDict.get(PdfName.TRIMBOX);
PdfRectangle trimBox = new PdfRectangle(
    Float.parseFloat(array.getArrayList().get(0).toString()),
    Float.parseFloat(array.getArrayList().get(1).toString()),
    Float.parseFloat(array.getArrayList().get(2).toString()),
    Float.parseFloat(array.getArrayList().get(3).toString()));
pageDict.put(PdfName.TRIMBOX, trimBox);
About providing the files: I can't, because Stack Overflow won't let me (I need at least 10 reputation).
Cheers
UPDATED:
Original file
PDF file after rotation
Because of the reputation limit (only 2 links allowed), I removed the image links.
I just compared the two files you provided in Adobe Acrobat, and it appears that the PDFs are just as they should be.
Test_Original.pdf:
Entries in page dictionary:
/MediaBox [0, 0, 666.29, 912.89]
/TrimBox [22, 22, 600, 400]
/BleedBox [22, 22, 600, 400]
/ArtBox [22, 22, 600, 400]
In Adobe Acrobat:
Test_After_ROTATED.pdf:
Entries in page dictionary:
/TrimBox [22, 22, 600, 400]
/MediaBox [0, 0, 666.29, 912.89]
/BleedBox [22, 22, 600, 400]
/ArtBox [22, 22, 600, 400]
/Rotate 270
In Adobe Acrobat:
Thus, everything is just fine: only the /Rotate entry was added to the page dictionary, and therefore everything (including the trim box) is rotated.
Thus, I assume the program you use to display the trim box is somewhat limited when rotation is part of the game.
You mentioned BIRT. Maybe BIRT has its own set of information concerning the trim box stored somewhere else (according to the screenshots you used to have in your question, with distances from top, left, right, and bottom). In that case, that information might also have to be updated when rotating the PDF.
Or, as you use a changed API made by you that defines the boxes, maybe that API has to be told about rotation effects?
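To make the geometry concrete (a sketch of my own, not part of either answer's code): box entries like /TrimBox are stored in unrotated user space, and /Rotate only changes how the page is displayed. Where a box appears after rotation can be computed like this (Python; assumes the media box origin is at (0, 0)):

```python
def rotated_box(box, media_box, rotate):
    """Where an unrotated page box appears once /Rotate is applied.

    `box` and `media_box` are [llx, lly, urx, ury] in unrotated user
    space; `rotate` is the /Rotate value (clockwise degrees).  The box
    entries themselves are NOT changed by /Rotate; only the displayed
    position moves, which is why the trim box "looks" shifted.
    """
    llx, lly, urx, ury = box
    _, _, w, h = media_box  # assumes the media box origin is (0, 0)
    r = rotate % 360
    if r == 0:
        return [llx, lly, urx, ury]
    if r == 90:   # each point (x, y) maps to (y, w - x)
        return [lly, w - urx, ury, w - llx]
    if r == 180:  # each point (x, y) maps to (w - x, h - y)
        return [w - urx, h - ury, w - llx, h - lly]
    if r == 270:  # each point (x, y) maps to (h - y, x)
        return [h - ury, llx, h - lly, urx]
    raise ValueError("rotate must be a multiple of 90")
```

For the files above, the trim box [22, 22, 600, 400] in the 666.29 × 912.89 media box ends up displayed at [512.89, 22, 890.89, 600] under /Rotate 270, which matches the "shifted" appearance the asker saw.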

Why are RGB values inconsistent between iPhone and Mac OSX Color Meter (Particularly the Red channel)?

I'm programming for the iPhone and I have a 3-channel UIImage taken from the iPhone camera. I'm trying to get the RGB values for different areas of this image. I currently cross-reference the RGB outputs I get on the iPhone with the Digital Color Meter that comes with Mac OS X.
Most of the values I obtain are fine, however for certain colors, the RGB values that I output vs. what the digital color meter read are very different.
For example, in the following link, I show an example of a square whose color that I calculate is different from the calculated value with the color meter.
http://www.learntobe.org/urs/square.php
Our calculated RGB from the iPhone is (41, 116, 86) for this square (also validated with the 'Color Expert' application).
The value calculated by the Apple Mac OSX color meter was measured to be (0, 121, 87).
Clearly, the R value is really off. All areas where there are color differences seem to be because of a huge discrepancy in the R values. Is there a specific reason for this?
Thanks for your help in advance!
This is to be expected.
iOS is not color managed, but Mac OS X is. This means that Mac OS X takes an image value of (255, 0, 0) and transforms it to a good match for the current display.
This is done so that you can have two displays, view a copy of an image on each display, and both copies will appear the same. For some pairs of displays, (255, 0, 0) on one display may look the same as (233, 89, 31) on the other.
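To make the idea of color management concrete, here is a minimal sketch (Python; the standard sRGB transfer curve and the D65 sRGB-to-XYZ matrix, not Apple's actual pipeline) of the first half of such a transform: map device sRGB into the device-independent CIE XYZ space. A color-managed OS chains this with the display's own profile to choose new device values, which is why the measured numbers differ.

```python
def srgb_to_linear(c):
    """Undo the sRGB transfer curve (c in 0..1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r, g, b):
    """Map 8-bit sRGB to device-independent CIE XYZ (D65 matrix)."""
    rl, gl, bl = (srgb_to_linear(v / 255.0) for v in (r, g, b))
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z
```

The 0.2126 weight on the red channel in the luminance (Y) row is small, so a display whose red primary differs from sRGB's forces a comparatively large change in the R device value to match, consistent with the red channel showing the biggest discrepancies.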