How to get a high quality avatar image with Smack XMPP (Facebook)

I used Smack's VCard to get the avatar image, but it returns an avatar of size 32*32.
That image is too small, so I want a high quality one, the way Facebook and other apps do it.
Could someone help me?
I tried searching on Google, but almost every thread says to use VCard to get the avatar.
=> My solution is to use the Graph API from Facebook (a sketch of loading this URL into a Bitmap follows the code below):

String urlAvatar = "https://graph.facebook.com/"
        + StringUtils.parseName(childrenEntryItems.getJid()).replace("-", "")
        + "/picture?type=normal";

This is the method I was using to get the avatar with VCard:
public static byte[] getAvatarByteArray(XMPPConnection xmppConnection, String user) {
    VCard vCard = new VCard();
    SmackConfiguration.setPacketReplyTimeout(30000);
    // ProviderManager.getInstance().addIQProvider("vCard", "vcard-temp",
    //         new VCardProvider());
    try {
        vCard.load(xmppConnection, user);
    } catch (XMPPException e1) {
        e1.printStackTrace();
    }
    // Log.d("Giang", vCard.toXML() + " byte length = "
    //         + vCard.getAvatar().length); // complete VCard information
    return vCard.getAvatar();
}
public static Bitmap makeBitemap(byte[] value) {
    if (value == null)
        return null;

    // Load only size values
    BitmapFactory.Options sizeOptions = new BitmapFactory.Options();
    sizeOptions.inJustDecodeBounds = true;
    BitmapFactory.decodeByteArray(value, 0, value.length, sizeOptions);

    // Calculate factor to down scale image
    int scale = 1;
    int width_tmp = sizeOptions.outWidth;
    int height_tmp = sizeOptions.outHeight;
    while (width_tmp / 2 >= 256 && height_tmp / 2 >= 256) {
        scale *= 2;
        width_tmp /= 2;
        height_tmp /= 2;
    }

    // Load image
    BitmapFactory.Options resultOptions = new BitmapFactory.Options();
    resultOptions.inSampleSize = scale;
    return BitmapFactory.decodeByteArray(value, 0, value.length, resultOptions);
}
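
For reference, a minimal sketch of fetching that Graph API URL and decoding it into a Bitmap (loadAvatarFromUrl is just an illustrative name, and it must be called off the main thread):

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

// Illustrative helper, not part of the original code: download the avatar URL
// and decode the response into a Bitmap. Call this from a background thread.
public static Bitmap loadAvatarFromUrl(String urlAvatar) {
    HttpURLConnection connection = null;
    try {
        connection = (HttpURLConnection) new URL(urlAvatar).openConnection();
        InputStream input = connection.getInputStream();
        return BitmapFactory.decodeStream(input);
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    } finally {
        if (connection != null) {
            connection.disconnect();
        }
    }
}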

I think the best approach would be to store the URL of the avatar in one of the custom vCard fields and fetch it over HTTP using an image loading library. Remember that XMPP is a stream-based architecture, and I would be cautious about blocking the stream by sending large files.
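A rough sketch of that idea with Smack's VCard (the AVATAR_URL field name is arbitrary, and the HTTP fetch itself is left to whatever image loading library you use):

// Rough sketch, assuming Smack 3.x style VCard as used in the question.
// "AVATAR_URL" is an arbitrary custom field name, not a standard vCard field.
public static void publishAvatarUrl(XMPPConnection connection, String avatarUrl)
        throws XMPPException {
    VCard vCard = new VCard();
    vCard.load(connection);                  // load your own vCard first
    vCard.setField("AVATAR_URL", avatarUrl);
    vCard.save(connection);
}

public static String fetchAvatarUrl(XMPPConnection connection, String user)
        throws XMPPException {
    VCard vCard = new VCard();
    vCard.load(connection, user);
    // Hand this URL to an HTTP image loading library instead of pushing
    // the image bytes through the XMPP stream.
    return vCard.getField("AVATAR_URL");
}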

Related

How can I download multiple images from Firestore Storage on a Flutter Web mobile app?

Following the Firestore documentation and other posts, I arrived at the following function:
Future<void> downloadProductImages() async {
  //TODO: modify on suppliers admin that images are saved with the correct name.
  final storageRef = FirebaseStorage.instance.ref();
  int count = 1;
  for (final ref in product.imgsMap.keys.toList()) {
    print(ref);
    try {
      final childRef = storageRef.child(ref);
      const maxSize = 10 * 1024 * 1024;
      final Uint8List data = (await childRef.getData(maxSize))!;
      // check if file is jpeg
      late String ext;
      if (data[0] == 0xFF &&
          data[1] == 0xD8 &&
          data[data.length - 2] == 0xFF &&
          data[data.length - 1] == 0xD9) {
        ext = 'jpeg';
      } else {
        ext = 'png';
      }
      final content = base64Encode(data!.toList());
      AnchorElement(href: "${product.imgsMap[ref]}")
        ..setAttribute("download", " ${product.name}_$count.$ext")
        ..setAttribute("type", "image/$ext")
        ..click();
      count++;
    } on FirebaseException catch (e) {
      //TODO: HANDLE EXCEPTIONS
      print(e);
    } on Exception catch (e) {
      // Handle any errors.
      print(e);
    }
  }
  // File file = // generated somewhere
  // final rawData = file.readAsBytesSync();
}
I am able to download multiple files; however, the images are not recognized as such in Chrome Mobile's downloads, which is my main target.
I am guessing that I am not building the AnchorElement correctly.
How can I fix this method, or is there a different way to download multiple images on Flutter Web?

How can I download multiple images from Firebase Storage on Flutter Web?

I am developing a Flutter Web application in which, after clicking a download button, I need to download multiple images from Firebase Storage.
How can I do this on Flutter Web?
UPDATE:
After following Frank's suggestion in the comments, and other posts, I wrote the following function:
Future<void> downloadProductImages() async {
  //TODO: modify on suppliers admin that images are saved with the correct name.
  final storageRef = FirebaseStorage.instance.ref();
  int count = 1;
  for (final ref in product.imgsMap.keys.toList()) {
    print(ref);
    try {
      final childRef = storageRef.child(ref);
      const maxSize = 10 * 1024 * 1024;
      final Uint8List data = (await childRef.getData(maxSize))!;
      // check if file is jpeg
      late String ext;
      if (data[0] == 0xFF &&
          data[1] == 0xD8 &&
          data[data.length - 2] == 0xFF &&
          data[data.length - 1] == 0xD9) {
        ext = 'jpeg';
      } else {
        ext = 'png';
      }
      // Data for "images/island.jpg" is returned, use this as needed.
      final content = base64Encode(data!.toList());
      AnchorElement(
          href:
              "data:application/octet-stream;charset=utf-16le;base64,$content")
        //href: "image/$ext;charset=utf-16le;base64,$content")
        ..setAttribute("download", "${product.name}_$count.$ext")
        ..click();
      count++;
    } on FirebaseException catch (e) {
      //TODO: HANDLE EXCEPTIONS
      print(e);
    } on Exception catch (e) {
      // Handle any errors.
      print(e);
    }
  }
  // File file = // generated somewhere
  // final rawData = file.readAsBytesSync();
}
On Chrome Mobile, the files are downloaded but are not recognized as pictures. It seems that the AnchorElement doesn't have the correct href.
Any ideas?

Unable to get metadata from a video using ExoPlayer without playback

I am trying to get video metadata using ExoPlayer without playback, as described in the documentation.
When I run it, I cannot get the list of audio tracks and subtitle tracks from a local video file.
MediaItem mediaItem = MediaItem.fromUri(videoUrl);
ListenableFuture<TrackGroupArray> trackGroupsFuture =
        MetadataRetriever.retrieveMetadata(this, mediaItem);
Futures.addCallback(trackGroupsFuture, new FutureCallback<TrackGroupArray>() {
    @Override
    public void onSuccess(TrackGroupArray trackGroups) {
        for (int i = 0; i < trackGroups.length; i++) {
            String format = trackGroups.get(i).getFormat(0).sampleMimeType;
            String lang = trackGroups.get(i).getFormat(0).language;
            String id = trackGroups.get(i).getFormat(0).id;
            if (format.contains("audio") && id != null && lang != null) {
                Log.d(TAG, "onSuccess: " + lang + " " + id);
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        //handleFailure(t);
        Log.d(TAG, "onFailure: " + t);
    }
}, executor);
Can someone help? How can I get the list of tracks available in a video?
Here is a sample video link: https://storage.googleapis.com/jlplayer/Dolittle.mkv
For example, the sample video above contains an English subtitle track, a Hindi audio track, and an English audio track. How do I extract those tracks?
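Not a definitive answer, but a sketch of how the same onSuccess callback could list both audio and subtitle tracks by checking every format's MIME type with ExoPlayer's MimeTypes helper (the import lines below are assumptions based on ExoPlayer 2.x package names):

import com.google.android.exoplayer2.Format;
import com.google.android.exoplayer2.source.TrackGroup;
import com.google.android.exoplayer2.source.TrackGroupArray;
import com.google.android.exoplayer2.util.MimeTypes;

@Override
public void onSuccess(TrackGroupArray trackGroups) {
    for (int i = 0; i < trackGroups.length; i++) {
        TrackGroup group = trackGroups.get(i);
        // A track group can contain more than one format, so walk all of them.
        for (int j = 0; j < group.length; j++) {
            Format format = group.getFormat(j);
            String mimeType = format.sampleMimeType;
            if (MimeTypes.isAudio(mimeType)) {
                Log.d(TAG, "audio track: id=" + format.id + " lang=" + format.language);
            } else if (MimeTypes.isText(mimeType)) {
                Log.d(TAG, "subtitle track: id=" + format.id + " lang=" + format.language);
            }
        }
    }
}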

How do I upload files from a phone to my Amazon S3 server?

I'm developing a mobile app with Unity and using S3 to store and retrieve assets. I can download asset bundles just fine from the server to the phone, but how do I upload files from the phone to the server?
I used the PostObject function from the AWS Unity SDK, and it works fine when I upload from the computer since I know the directory, but I'm not sure how to upload a photo from the phone's gallery to the S3 server.
This is the PostObject function
public void PostObject(string fileName)
{
    ResultText.text = "Retrieving the file";

    var stream = new FileStream("file://" + Application.streamingAssetsPath + "/" + fileName,
        FileMode.Open, FileAccess.Read, FileShare.Read);

    Debug.Log("kek");
    ResultText.text += "\nCreating request object";

    var request = new PostObjectRequest()
    {
        Bucket = S3BucketName,
        Key = fileName,
        InputStream = stream,
        CannedACL = S3CannedACL.Private,
        Region = _S3Region
    };

    ResultText.text += "\nMaking HTTP post call";

    Client.PostObjectAsync(request, (responseObj) =>
    {
        if (responseObj.Exception == null)
        {
            ResultText.text += string.Format("\nobject {0} posted to bucket {1}",
                responseObj.Request.Key, responseObj.Request.Bucket);
        }
        else
        {
            ResultText.text += "\nException while posting the result object";
            ResultText.text += string.Format("\n received error {0}",
                responseObj.Response.HttpStatusCode.ToString());
        }
    });
}
And this is where I'm using it to upload the picture taken from the phone to the server
public void TakePicture(int maxSize)
{
    NativeCamera.Permission permission = NativeCamera.TakePicture((path) =>
    {
        Debug.Log("Image path: " + path);
        if (path != null)
        {
            // Create a Texture2D from the captured image
            Texture2D imageTexture = NativeCamera.LoadImageAtPath(path, maxSize);
            if (imageTexture == null)
            {
                Debug.Log("Couldn't load texture from " + path);
                return;
            }

            //picturePreview.gameObject.SetActive(true);
            //picturePreview.texture = imageTexture;

            Texture2D readableTexture = DuplicateTexture(imageTexture);
            StartCoroutine(AddImageJob(readableTexture));

            // Saves taken photo to the Image Gallery
            if (isSaveFiles)
            {
                NativeGallery.SaveImageToGallery(imageTexture, "AReview", "test");

                // Upload to Amazon S3
                aws.PostObject(imageTexture.name);
                aws.PostObject("test");
            }
        }
    }, maxSize);

    Debug.Log("Permission result: " + permission);
}
Any clues?
Thank you.

Toolchain allowing two-way communication between a D app and a browser

I wish to have an app written in the D programming language update its display in a browser. The browser should also send input data back to the app.
I'm still quite new to programming and am confused with how sockets/websockets/servers all fit together. Can anyone suggest an approach?
Many thanks to gmfawcett for the link to his basic D server example, which I've mated with a bare-bones websocket implementation of the version 8 spec that I found elsewhere (currently it only works in Chrome 14/15, I believe). It's pretty much cut-and-paste, but it seems to work well enough, and I expect it will be sufficient for my needs.
If anyone has the inclination to cast a quick eye over my code for any glaring no-nos, please feel free to do so - and thanks!
Bare-bones websocket impl: http://blog.vunie.com/implementing-websocket-draft-10
Websocket v8 spec (protocol-17): https://datatracker.ietf.org/doc/html/draft-ietf-hybi-thewebsocketprotocol-17
module wsserver;

import std.algorithm;
import std.base64;
import std.conv;
import std.stdio;
import std.socket;
import std.string;

// std.crypto: https://github.com/pszturmaj/phobos/tree/master/std/crypto
import crypto.hash.base;
import crypto.hash.sha;

struct WsServer
{
    private
    {
        Socket s;
        Socket conn;
        string subProtocol;
    }

    this(string host, ushort port = 8080, string subProtocol = "null")
    {
        this.subProtocol = subProtocol;

        s = new TcpSocket(AddressFamily.INET);
        s.bind(new InternetAddress(host, port));
        s.listen(8);

        conn = s.accept();

        writeln("point/refresh your browser to \"http://", host, "\" to initiate the websocket handshake");

        try
        {
            initHandshake(conn);
        }
        catch (Throwable e)
        {
            stderr.writeln("thrown: ", e);
        }
    }

    ~this()
    {
        conn.shutdown(SocketShutdown.BOTH);
        conn.close();
        s.shutdown(SocketShutdown.BOTH);
        s.close();
    }

    string data()
    {
        ubyte[8192] msgBuf;
        auto msgBufLen = conn.receive(msgBuf);

        auto firstByte = msgBuf[0];
        auto secondByte = msgBuf[1];

        // not sure these two checks are working correctly!!!
        enforce((firstByte & 0x81), "Fragments not supported");  // enforce FIN bit is present
        enforce((secondByte & 0x80), "Masking bit not present"); // enforce masking bit is present

        auto msgLen = secondByte & 0x7f;
        ubyte[] mask, msg;

        if (msgLen < 126)
        {
            mask = msgBuf[2..6];
            msg = msgBuf[6..msgBufLen];
        }
        else if (msgLen == 126)
        {
            mask = msgBuf[4..8];
            msg = msgBuf[8..msgBufLen];
        }

        foreach (i, ref e; msg)
            e = msg[i] ^ mask[i%4];

        debug writeln("Client: " ~ cast(string) msg);

        return cast(string) msg;
    }

    void data(string msg)
    {
        ubyte[] newFrame;

        if (msg.length > 125)
            newFrame = new ubyte[4];
        else
            newFrame = new ubyte[2];

        newFrame[0] = 0x81;

        if (msg.length > 125)
        {
            newFrame[1] = 126;
            newFrame[2] = cast(ubyte) msg.length >> 8;
            newFrame[3] = msg.length & 0xFF;
        }
        else
            newFrame[1] = cast(ubyte) msg.length;

        conn.send(newFrame ~= msg);

        debug writeln("Server: " ~ msg);
    }

    private void initHandshake(Socket conn)
    {
        ubyte[8192] buf; // big enough for some purposes...
        size_t position, headerEnd, len, newpos;

        // Receive the whole header before parsing it.
        while (true)
        {
            len = conn.receive(buf[position..$]);
            debug writeln(cast(string)buf);

            if (len == 0) // empty request
                return;

            newpos = position + len;
            headerEnd = countUntil(buf[position..newpos], "\r\n\r\n");
            position = newpos;

            if (headerEnd >= 0)
                break;
        }

        // Now parse the header.
        auto lines = splitter(buf[0..headerEnd], "\r\n");
        string request_line = cast(string) lines.front;
        lines.popFront;

        // a very simple Header structure.
        struct Pair
        {
            string key, value;

            this(ubyte[] line)
            {
                auto tmp = countUntil(line, ": ");
                key = cast(string) line[0..tmp]; // maybe down-case these?
                value = cast(string) line[tmp+2..$];
            }
        }

        Pair[] headers;

        foreach (line; lines)
            headers ~= Pair(line);

        auto tmp = splitter(request_line, ' ');
        string method = tmp.front;   tmp.popFront;
        string url = tmp.front;      tmp.popFront;
        string protocol = tmp.front; tmp.popFront;

        enum GUID_v8 = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"; // version 8 spec... might change

        auto sha1 = new SHA1;
        sha1.put(strip(headers[5].value) ~ GUID_v8);
        auto respKey = to!string(Base64.encode(sha1.finish()));

        // Prepare a response, and send it
        string resp = join(["HTTP/1.1 101 Switching Protocols",
                            "Upgrade: websocket",
                            "Connection: Upgrade",
                            "Sec-WebSocket-Accept: " ~ respKey,
                            "Sec-WebSocket-Protocol: " ~ subProtocol,
                            ""],
                           "\r\n");

        conn.send(cast(ubyte[]) (resp ~ "\r\n"));

        debug writeln(resp);
    }
}