I'm rendering a signature on the native canvas for Android, iOS, and Windows separately. Now I need to save it locally, so I have converted the view to a stream image source. But I'm not sure how to convert it to PNG and save it locally?
You can use dependency injection to deal with this: define an interface in shared code and implement it on the platform you want.
For example, here is a test I did on Android.
The interface:
public interface MyService
{
    void Convert(string filename, ImageSource img);
}
The implementation:
public class AndroidService : MyService
{
public async void Convert(string filename, ImageSource img)
{
System.IO.Stream outputStream = null;
// FileImageSourceHandler works for file-based sources; for a StreamImageSource,
// Xamarin.Forms' StreamImagesourceHandler can be used the same way.
var handler = new FileImageSourceHandler();
Bitmap pic = await handler.LoadImageAsync(img, Android.App.Application.Context);
var savedImageFilename = System.IO.Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), filename);
bool success = false;
using (outputStream = new System.IO.FileStream(savedImageFilename, System.IO.FileMode.Create))
{
if (System.IO.Path.GetExtension(filename).ToLower() == ".png")
{
success = await pic.CompressAsync(Bitmap.CompressFormat.Png, 100, outputStream);
}
else
success = await pic.CompressAsync(Bitmap.CompressFormat.Jpeg, 100, outputStream);
}
}
}
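To consume this from shared code, register the implementation and resolve it through Xamarin.Forms' DependencyService. A minimal sketch (the registration attribute goes in the Android project; the button handler and the way you obtain the ImageSource are placeholders):
// In the Android project, above the namespace declaration:
[assembly: Dependency(typeof(AndroidService))]

// In shared code, e.g. a save-button handler:
void OnSaveClicked(object sender, EventArgs e)
{
    ImageSource signatureImage = GetSignatureImageSource(); // placeholder for your stream image source
    DependencyService.Get<MyService>().Convert("signature.png", signatureImage);
}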
I know that I need to use a platform channel to pass the data, but I am unable to figure out how. I have added my Android-side code:
import io.flutter.embedding.android.FlutterActivity;
public class MainActivity extends AppCompatActivity {
private Button Btn;
// Intent defaultFlutter=FlutterActivity.createDefaultIntent(activity);
String path;
private Button bt;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Btn = findViewById(R.id.btn);
isStoragePermissionGranted();
Btn.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view)
{
path=takeScreenshot();
// activity.startActivity(defaultFlutter);
}
});
//write flutter code here
//FlutterActivity.createDefaultIntent(this);
}
private String takeScreenshot() {
Date now = new Date();
CharSequence timestamp = android.text.format.DateFormat.format("yyyy-MM-dd_hh-mm-ss", now);
try {
// image naming and path to include sd card, appending the name you choose for the file
String mPath = Environment.getExternalStorageDirectory().toString() + "/" + timestamp + ".jpg";
// create bitmap screen capture
View v1 = getWindow().getDecorView().getRootView();
v1.setDrawingCacheEnabled(true);
Bitmap bitmap = Bitmap.createBitmap(v1.getDrawingCache());
v1.setDrawingCacheEnabled(false);
File imageFile = new File(mPath);
Log.d("path",mPath);
FileOutputStream outputStream = new FileOutputStream(imageFile);
int quality = 100;
bitmap.compress(Bitmap.CompressFormat.JPEG, quality, outputStream);
outputStream.flush();
outputStream.close();
return mPath;
///openScreenshot(imageFile);
} catch (Throwable e) {
// Several error may come out with file handling or DOM
e.printStackTrace();
return "Error";
}
}
public boolean isStoragePermissionGranted() {
String TAG = "Storage Permission";
if (Build.VERSION.SDK_INT >= 23) {
if (this.checkSelfPermission(android.Manifest.permission.WRITE_EXTERNAL_STORAGE)
== PackageManager.PERMISSION_GRANTED) {
Log.v(TAG, "Permission is granted");
return true;
} else {
Log.v(TAG, "Permission is revoked");
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, 1);
return false;
}
}
else { //permission is automatically granted on sdk<23 upon installation
Log.v(TAG,"Permission is granted");
return true;
}
}
}
I will receive a file path from the Android side; upon receiving it, I need to display the image in Flutter. I also need to use a cached engine when handing over, since starting a new engine normally causes a delay.
You can use the cached engine; this will help cover the engine start-up delay.
Then, in your onClick handler, call invokeMethod on a MethodChannel, passing the method name and the data you want to send (here, the screenshot path).
On the Flutter side, create a MethodChannel with the same name and set a method-call handler so you can receive the path and process it further.
I am trying to send a stream (containing an image file) from a WASM client to a backend .NET Core 5 server. In the WASM app, I start with a MemoryStream that contains the file data. In order to send the data contained in this MemoryStream using HttpClient.PostAsync, I seem to have to convert it to a StreamContent object:
StreamContent streamContent = new StreamContent(imageMemoryStream);
I use the debugger to verify that the length of the content of streamContent is not zero at this point. So far so good.
I then use HttpClient.PostAsync to send this stream to the server:
var response = await Http.PostAsync("api/HttpStreamReceiver", streamContent);
On the server side, I have a controller that receives HTTP messages:
[Route("api/[controller]")]
[ApiController]
public class HttpStreamReceiverController : ControllerBase
{
[HttpPost]
public async Task<ActionResult> Get()
{
Stream imageStream;
try
{
imageStream = Request.Body;
}
catch (Exception)
{
return new BadRequestObjectResult("Error saving file");
}
// ... process imageStream here ...
return Ok();
}
}
Here, it seems that Request.Body is empty. Trying to evaluate the length of either Request.Body or of imageStream on the server side results in a System.NotSupportedException, and
await imageStream.ReadAsync(buffer);
leaves buffer blank. What am I doing wrong here?
The image file cannot be transmitted through the body unless it is serialized. I suggest you use MultipartFormDataContent to pass the file.
This is an example.
class Program
{
static async Task Main(string[] args)
{
string filePath = @"D:\upload\images\1.png";
HttpClient _httpClient = new HttpClient();
string _url = "https://localhost:44324/api/HttpStreamReceiver/";
if (string.IsNullOrWhiteSpace(filePath))
{
throw new ArgumentNullException(nameof(filePath));
}
if (!File.Exists(filePath))
{
throw new FileNotFoundException($"File [{filePath}] not found.");
}
//Create form
using var form = new MultipartFormDataContent();
//read the file into a byte array
byte[] buffer = File.ReadAllBytes(filePath);
//var bytefile = AuthGetFileData(filePath);
var fileContent = new ByteArrayContent(buffer);
fileContent.Headers.ContentType = MediaTypeHeaderValue.Parse("image/png"); //content type of the file part
form.Add(fileContent, "image", Path.GetFileName(filePath));
//the other data in form
var response = await _httpClient.PostAsync($"{_url}", form);
response.EnsureSuccessStatusCode();
var responseContent = await response.Content.ReadAsStringAsync();
}
}
Web API:
[Route("api/[controller]")]
[ApiController]
public class HttpStreamReceiverController: ControllerBase
{
[HttpPost]
public async Task<ActionResult> Get(IFormFile image)
{
//...
return Ok("get");
}
}
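Since your WASM client already holds the image in a MemoryStream rather than a file on disk, the same idea on the client side could look like this sketch (reusing the imageMemoryStream and Http from your question; the file name is a placeholder, and the form field name must match the IFormFile parameter):
using var form = new MultipartFormDataContent();
var fileContent = new ByteArrayContent(imageMemoryStream.ToArray());
fileContent.Headers.ContentType = MediaTypeHeaderValue.Parse("image/png");
form.Add(fileContent, "image", "upload.png"); // "image" matches the IFormFile parameter name
var response = await Http.PostAsync("api/HttpStreamReceiver", form);
response.EnsureSuccessStatusCode();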
Using OsmSharp, I am having trouble opening a stream for a file (which I can provide on demand).
The error occurs in PBFReader (line 104)
using (var tmp = new LimitedStream(_stream, length))
{
header = _runtimeTypeModel.Deserialize(tmp, null, _blockHeaderType) as BlobHeader;
}
and states: "ProtoBuf.ProtoException: 'Invalid field in source data: 0'", which might mean different things, as I have read in this SO question.
The file opens and is visualized correctly in QGIS, so in my opinion it is not corrupt.
Can it be that the contracts do not match? Is OsmSharp/core updated to the latest .proto files for OSM from here (although I am not sure whether this is the real original source for the definition files)?
And, what might make more sense: can it be that the file I attached was generated for v2 of the OSM PBF specification?
In the code at the line of the exception I see the following comment, which makes me wonder:
// TODO: remove some of the v1 specific code.
// TODO: this means also to use the built-in capped streams.
// code borrowed from: http://stackoverflow.com/questions/4663298/protobuf-net-deserialize-open-street-maps
// I'm just being lazy and re-using something "close enough" here
// note that v2 has a big-endian option, but Fixed32 assumes little-endian - we
// actually need the other way around (network byte order):
// length = IntLittleEndianToBigEndian((uint)length);
BlobHeader header;
// again, v2 has capped-streams built in, but I'm deliberately
// limiting myself to v1 features
So this makes me wonder whether OsmSharp is (still) up to date.
My sandbox code looks like this:
using OsmSharp.Streams;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using OsmSharp.Tags;
namespace OsmSharp
{
class Program
{
private const string Path = @"C:\Users\Bernoulli IT\Documents\Applications\Argaleo\Test\";
private const string FileNameAntarctica = "antarctica-latest.osm";
private const string FileNameOSPbf = "OSPbf";
private const Boolean useRegisterSource = false;
private static KeyValuePair<string, string> KeyValuePair = new KeyValuePair<string, string>("joep", "monita");
static void Main(string[] args)
{
Console.WriteLine("Hello World!");
//string fileName = $@"{Path}\{FileNameAntarctica}.pbf";
string fileName = $@"{Path}\{FileNameOSPbf}.pbf";
string newFileName = $"{fileName.Replace(".pbf", string.Empty)}-{Guid.NewGuid().ToString().Substring(0, 4)}.pbf";
Console.WriteLine("*** Complete");
string fileNameOutput = CompleteFlow(fileName, newFileName);
Console.WriteLine("");
Console.WriteLine("*** Display");
DisplayFlow(fileNameOutput);
Console.ReadLine();
}
private static string CompleteFlow(string fileName, string newFileName)
{
// 1. Open file and convert to bytes
byte[] fileBytes = FileToBytes(fileName);
// 2. Bytes to OSM stream source (pbf)
PBFOsmStreamSource osmStreamSource;
osmStreamSource = BytesToOsmStreamSource(fileBytes);
osmStreamSource.MoveNext();
if (osmStreamSource.Current() == null)
{
osmStreamSource = FileToOsmStreamSource(fileName);
osmStreamSource.MoveNext();
if (osmStreamSource.Current() == null)
{
throw new Exception("No current in stream.");
}
}
// 3. Add custom tag
AddTag(osmStreamSource);
// 4. OSM stream source to bytes
//byte[] osmStreamSourceBytes = OsmStreamSourceToBytes(osmStreamSource);
// 5. Bytes to file
//string fileNameOutput = BytesToFile(osmStreamSourceBytes, newFileName);
OsmStreamSourceToFile(osmStreamSource, newFileName);
Console.WriteLine(newFileName);
return newFileName;
}
private static void DisplayFlow(string fileName)
{
// 1. Open file and convert to bytes
byte[] fileBytes = FileToBytes(fileName);
// 2. Bytes to OSM stream source (pbf)
BytesToOsmStreamSource(fileBytes);
}
private static byte[] FileToBytes(string fileName)
{
Console.WriteLine(fileName);
return File.ReadAllBytes(fileName);
}
private static PBFOsmStreamSource BytesToOsmStreamSource(byte[] bytes)
{
MemoryStream memoryStream = new MemoryStream(bytes);
memoryStream.Position = 0;
PBFOsmStreamSource osmStreamSource = new PBFOsmStreamSource(memoryStream);
foreach (OsmGeo element in osmStreamSource.Where(osmGeo => osmGeo.Tags.Any(tag => tag.Key.StartsWith(KeyValuePair.Key))))
{
foreach (Tag elementTag in element.Tags.Where(tag => tag.Key.StartsWith(KeyValuePair.Key)))
{
Console.WriteLine("!!!!!!!!!!!!!! Tag found while reading !!!!!!!!!!!!!!!!!!".ToUpper());
}
}
return osmStreamSource;
}
private static PBFOsmStreamSource FileToOsmStreamSource(string fileName)
{
using (FileStream fileStream = new FileInfo(fileName).OpenRead())
{
PBFOsmStreamSource osmStreamSource = new PBFOsmStreamSource(fileStream);
return osmStreamSource;
}
}
private static void AddTag(PBFOsmStreamSource osmStreamSource)
{
osmStreamSource.Reset();
OsmGeo osmGeo = null;
while (osmGeo == null)
{
osmStreamSource.MoveNext();
osmGeo = osmStreamSource.Current();
if(osmGeo?.Tags == null)
{
osmGeo = null;
}
}
osmGeo.Tags.Add("joep", "monita");
Console.WriteLine($"{osmGeo.Tags.FirstOrDefault(tag => tag.Key.StartsWith(KeyValuePair.Key)).Key} - {osmGeo.Tags.FirstOrDefault(tag => tag.Key.StartsWith(KeyValuePair.Key)).Value}");
}
private static byte[] OsmStreamSourceToBytes(PBFOsmStreamSource osmStreamSource)
{
MemoryStream memoryStream = new MemoryStream();
PBFOsmStreamTarget target = new PBFOsmStreamTarget(memoryStream, true);
osmStreamSource.Reset();
target.Initialize();
UpdateTarget(osmStreamSource, target);
target.Flush();
target.Close();
return memoryStream.ToArray();
}
private static string BytesToFile(byte[] bytes, string fileName)
{
using (FileStream fs = new FileStream(fileName, FileMode.Create, FileAccess.Write))
{
fs.Write(bytes, 0, bytes.Length);
}
return fileName;
}
private static void OsmStreamSourceToFile(PBFOsmStreamSource osmStreamSource, string fileName)
{
using (FileStream fileStream = new FileInfo(fileName).OpenWrite())
{
PBFOsmStreamTarget target = new PBFOsmStreamTarget(fileStream, true);
osmStreamSource.Reset();
target.Initialize();
UpdateTarget(osmStreamSource, target);
target.Flush();
target.Close();
}
}
private static void UpdateTarget(OsmStreamSource osmStreamSource, OsmStreamTarget osmStreamTarget)
{
if (useRegisterSource)
{
osmStreamTarget.RegisterSource(osmStreamSource, osmGeo => true);
osmStreamTarget.Pull();
}
else
{
bool isFirst = true;
foreach (OsmGeo osmGeo in osmStreamSource)
{
Tag? tag = osmGeo.Tags?.FirstOrDefault(t => t.Key == KeyValuePair.Key);
switch (osmGeo.Type)
{
case OsmGeoType.Node:
if (isFirst)
{
for (int indexer = 0; indexer < 1; indexer++)
{
(osmGeo as Node).Tags.Add(new Tag(KeyValuePair.Key + Guid.NewGuid(), KeyValuePair.Value));
}
isFirst = false;
}
osmStreamTarget.AddNode(osmGeo as Node);
break;
case OsmGeoType.Way:
osmStreamTarget.AddWay(osmGeo as Way);
break;
case OsmGeoType.Relation:
osmStreamTarget.AddRelation(osmGeo as Relation);
break;
default:
throw new ArgumentOutOfRangeException();
}
}
}
}
}
}
I have already posted this question on the GitHub page of OsmSharp, as linked here. Any help would be very much appreciated.
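For anyone who wants to try to reproduce this without the whole sandbox, the essential read path, condensed from the code above (no new APIs), is:
using (FileStream fileStream = File.OpenRead(fileName))
{
    PBFOsmStreamSource osmStreamSource = new PBFOsmStreamSource(fileStream);
    osmStreamSource.MoveNext(); // reading here is where PBFReader deserializes the BlobHeader and the ProtoException surfaces
    Console.WriteLine(osmStreamSource.Current()?.Type);
}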
Signature pad sending empty
I'm using a signature pad and a canvas drawn with SkiaSharp, but when I send the drawing after the user signs and encode it to Base64, the server only shows an empty canvas.
async void OnSaveButtonClicked(object sender, EventArgs args)
{
using (SKImage image = SKImage.FromBitmap(saveBitmap))
{
try
{
SKData data = image.Encode(SKEncodedImageFormat.Png, 100);
var bytesImg = data.ToArray();
string imageBase64 = Convert.ToBase64String(bytesImg);
var respuesta = await this.ApiService.PostSignature(
this.url,
this.Id,
imageBase64
);
The method that sends it, in my services class:
public async Task<string> PostSignature(
string urlBase,
string folio,
string imageBase64)
{
try
{
var client = new HttpClient();
var response = await client.PostAsync(urlBase,
new StringContent(string.Format(
"idReporte={0}&imgFirma={1}",
folio, imageBase64),
Encoding.UTF8, "application/x-www-form-urlencoded"));
if (!response.IsSuccessStatusCode)
{
return response.ToString();
}
else
{
var result = await response.Content.ReadAsStringAsync();
return result;
}
}
catch
{
return null;
}
}
END REQUEST...
catch (Exception ex)
{
await Application.Current.MainPage.DisplayAlert(
"Error",
"Image Is not Send, error: " + ex.Message,
"OK"
);
}
finally
{
completedPaths.Clear();
inProgressPaths.Clear();
UpdateBitmap();
canvasView.InvalidateSurface();
}
The image is decoded OK on the server and saved to the folder path, but it is empty.
According to your description, you want to get the image from the signature pad and convert it into Base64. I made a simple example that you can take a look at: just copy the image stream into a MemoryStream.
<StackLayout>
<forms:SignaturePadView
x:Name="signaturepad"
BackgroundColor="Black"
HeightRequest="350"
StrokeColor="White"
StrokeWidth="3"
WidthRequest="250" />
<Button
x:Name="save"
Clicked="Save_Clicked"
HeightRequest="50"
Text="save"
WidthRequest="200" />
</StackLayout>
private async void Save_Clicked(object sender, EventArgs e)
{
string base64String;
using (var memoryStream = new MemoryStream())
{
var signature = await signaturepad.GetImageStreamAsync(SignatureImageFormat.Png);
signature.CopyTo(memoryStream);
var byteArray = memoryStream.ToArray();
base64String = Convert.ToBase64String(byteArray);
}
}
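From there you can hand the string straight to the service method from your question, assuming this runs in the same page that holds ApiService, url and Id:
var respuesta = await this.ApiService.PostSignature(this.url, this.Id, base64String);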
I'm using JavaFX's Drag and Drop system in my application, and it has been working well so far.
Now I want to support drag and drop to outside applications, e.g. dragging files from my application to the file explorer. How would I achieve that?
I've achieved what you described by using:
Vector<File> files = new Vector<File>();
private ClipboardContent filesToCopyClipboard = new ClipboardContent();
...
final ObjectWithAReturnablePathField draggableObj = new ObjectWithAReturnablePathField();
...
draggableObj.setOnDragDetected(new EventHandler<MouseEvent>()
{
@Override
public void handle(MouseEvent me)
{
Dragboard db = draggableObj.startDragAndDrop(TransferMode.ANY);
try
{
File f = new File(new URI(draggableObj.getFilePath()));
files.add(f);
filesToCopyClipboard.putFiles(files);
}
catch (URISyntaxException e)
{
e.printStackTrace();
}
db.setContent(filesToCopyClipboard);
me.consume();
}
});
draggableObj.setOnDragDone(new EventHandler<DragEvent>()
{
@Override
public void handle(DragEvent me)
{
me.consume();
}
});
Which means:
It's possible to achieve file transfer between JavaFX 2 and a native application by filling a ClipboardContent with a file list, using TransferMode.ANY in the setOnDragDetected handler of any draggable object (any Node) that can return a path to a file. In my case, I created a class called Thumb extending ImageView and, among other things, gave it a getFilePath() method that returns the path of the Image used to initialize the ImageView. Sorry for the rough example and the poor English; I'm running out of time to give a more detailed answer right now. I hope it helps. Cheers.
Here is a sample drag handler on an ImageView for extracting the image to the OS file explorer (with custom processing for JPG images to remove the alpha channel so they display correctly):
inputImageView.setOnDragDetected(new EventHandler <MouseEvent>() {
@Override
public void handle(MouseEvent event) {
// for paste as file, e.g. in Windows Explorer
try {
Clipboard clipboard = Clipboard.getSystemClipboard();
Dragboard db = inputImageView.startDragAndDrop(TransferMode.ANY);
ClipboardContent content = new ClipboardContent();
Image sourceImage = inputImageView.getImage();
ImageInfo imageInfo = (ImageInfo) inputImageView.getUserData();
String name = FilenameUtils.getBaseName(imageInfo.getName());
String ext = FilenameUtils.getExtension(imageInfo.getName());
// Avoid a "prefix length too short" error when the file name length is <= 3
if (name.length() < 4){
name = name + Long.toHexString(Double.doubleToLongBits(Math.random()));
}
File temp = File.createTempFile(name, "."+ext);
if (ext.contentEquals("jpg")|| ext.contentEquals("jpeg")){
BufferedImage image = SwingFXUtils.fromFXImage(sourceImage, null); // Get buffered image.
BufferedImage imageRGB = new BufferedImage(image.getWidth(),image.getHeight(),
BufferedImage.OPAQUE);
Graphics2D graphics = imageRGB.createGraphics();
graphics.drawImage(image, 0, 0, null);
graphics.dispose();
ImageIO.write(imageRGB, ext, temp);
}else{
ImageIO.write(SwingFXUtils.fromFXImage(sourceImage, null),
ext, temp);
}
content.putFiles(java.util.Collections.singletonList(temp));
db.setContent(content);
clipboard.setContent(content);
event.consume();
temp.deleteOnExit();
} catch (IOException ex) {
System.out.println(ex.getMessage());
}
}
});
With the help of an object passed to the ImageView's setUserData method, I can retrieve the database id and picture name:
public class ImageInfo {
private String imageInfo;
private int inputId;
@Override
public String toString() {
return imageInfo;
}
public ImageInfo(String imageInfo, int inputId) {
this.imageInfo = imageInfo;
this.inputId = inputId;
}
public String getName() {
return imageInfo;
}
public void setName(String imageInfo) {
this.imageInfo = imageInfo;
}
public int getIndex() {
return inputId;
}
public void setIndex(int inputId) {
this.inputId = inputId;
}
}
I hope it will help someone when they need it :-)
Regards