SVG and other picture formats like PNG/JPEG support in Flutter

The SVG format is not supported by Flutter natively, so I'm forced to use the flutter_svg package, which doesn't support .png.
I'm looking for a solution that supports both SVG and other picture formats like .png, .jpg, etc.
PS: the network URL is mapped like example.com/media/id, so there is no extension in the URL.

You can use something like the following methods:
import 'package:flutter/material.dart';
import 'package:flutter_svg/flutter_svg.dart';

// Picks an SVG or raster renderer based on the file extension in the URL.
Widget getPicture(String url) {
  var extension = getFileExtension(url);
  if (extension == ".svg") {
    return SvgPicture.network(
      url,
      color: Colors.red,
      semanticsLabel: 'A red up arrow',
    );
  } else {
    return Image.network(url);
  }
}

String getFileExtension(String fileName) {
  return "." + fileName.split('.').last;
}
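Since your URLs have no extension (example.com/media/id), the extension check above won't help there. One option, sketched under the assumption that you use the http package and that the server reports a correct Content-Type header (getPictureByMimeType is a hypothetical helper name), is to issue a HEAD request first and branch on the MIME type:
import 'package:flutter/material.dart';
import 'package:flutter_svg/flutter_svg.dart';
import 'package:http/http.dart' as http;

// Asks the server for the MIME type instead of relying on a file extension.
// Assumes the server answers HEAD requests and sets Content-Type accurately.
Future<Widget> getPictureByMimeType(String url) async {
  final response = await http.head(Uri.parse(url));
  final contentType = response.headers['content-type'] ?? '';
  if (contentType.contains('svg')) {
    return SvgPicture.network(url);
  }
  return Image.network(url);
}
Because this is asynchronous, you would call it from a FutureBuilder, or do the HEAD request in your data layer and pass the detected type down to the widget.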


Developing for Alexa, how do I display an mp4 video from a private S3 bucket? (I can do it with mp3 audio using /Utils, but not with the video.)

When developing for Alexa, using:
var audioUrl = Util.getS3PreSignedUrl("Media/001.mp3").replace(/&/g, '&amp;');
I can play an mp3 audio clip using an SSML tag and yet keep the mp3 private, by storing it in an Alexa-hosted S3 bucket (and locking down the permissions to it):
output1 += audio001[currentTrack] + `<audio src="${audioUrl1}"/>` + moreInstructions; // AND
return handlerInput.responseBuilder
  .speak(output1)
  .reprompt(moreInstructions)
  .getResponse();
However, I can't seem to follow the same approach for an mp4 / video format. It seems that when using the Alexa Presentation Language (APL), you have to store your videos publicly on the internet. I have tried to use the pre-signed URL utility function for the mp4 video, but it doesn't seem to work ...
I tried the following:
const Alexa = require('ask-sdk-core');
const Util = require('./util.js');
const DOCUMENT_ID = "VideoDocument";
var videoUrl = Util.getS3PreSignedUrl("Media/safari.mp4").replace(/&/g, '&amp;');
const datasource = {
  "videoPlayerTemplateData": {
    "type": "object",
    "properties": {
      "backgroundImage": "https://d2o906d8ln7ui1.cloudfront.net/images/response_builder/background-green.png",
      "displayFullscreen": true,
      "headerTitle": "xxx",
      "headerSubtitle": "xxx",
      "logoUrl": "xxx",
      "videoControlType": "skip",
      "videoSources": [
        // "https://d2o906d8ln7ui1.cloudfront.net/videos/AdobeStock_277864451.mov",
        "https://d2o906d8ln7ui1.cloudfront.net/videos/AdobeStock_292807382.mov",
        videoUrl
      ],
      "sliderType": "determinate"
    }
  }
};
const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
  },
  handle(handlerInput) {
    const speakOutput = 'Welcome, you can say Hello or Help. Which would you like to try?';
    return handlerInput.responseBuilder
      .speak(speakOutput)
      .reprompt(speakOutput)
      .getResponse();
  }
};

const createDirectivePayload = (aplDocumentId, dataSources = {}, tokenId = "documentToken") => {
  return {
    type: "Alexa.Presentation.APL.RenderDocument",
    token: tokenId,
    document: {
      type: "Link",
      src: "doc://alexa/apl/documents/" + aplDocumentId
    },
    datasources: dataSources
  }
};

const SampleAPLRequestHandler = {
  canHandle(handlerInput) {
    // handle named intent
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'HelloWorldIntent';
  },
  handle(handlerInput) {
    if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
      // generate the APL RenderDocument directive that will be returned from your skill
      const aplDirective = createDirectivePayload(DOCUMENT_ID, datasource);
      // add the RenderDocument directive to the responseBuilder
      handlerInput.responseBuilder.addDirective(aplDirective);
    }
    // send out skill response
    return handlerInput.responseBuilder.getResponse();
  }
};
It works if videoUrl is set to a hard-coded string or the URL of a publicly hosted mp4 video, but not if it points to the pre-signed URL for an mp4 video from an S3 bucket.
The .replace(/&/g, '&amp;') is only needed for URLs that will be embedded in SSML: SSML is XML syntax inside JSON, so two parsers apply, and the ampersands have to be escaped for the inner XML parser.
If you get a file URL directly in JSON (which is what APL is), then you can just use var videoUrl = Util.getS3PreSignedUrl("Media/safari.mp4"); and it should work.

What is the name of the element that GitHub Copilot uses to highlight text?

I would like to make a control similar to the one used by GitHub Copilot, i.e. highlighting the proposed text. The Live Share extension uses a very similar approach. What is the name of this control?
(Screenshots of the control in both extensions were attached here.)
I guess it could be TextEditorDecorationType? However, I do not know how to style it so that the author is absolutely positioned :/
You can create a similar experience using text editor decorations. These decorations allow you to apply custom style patterns to any text in a document (including foreground and background colors).
The text highlighting shown in your screenshots is simply a background color added to a span of text that has been selected by the user or suggested by an extension. For the inline author label that Live Share shows, the same API should cover it: createTextEditorDecorationType also accepts before/after render options whose contentText is rendered attached to the decorated range.
As an example, if you wanted to add custom highlighting for console.log, you could use the following:
import * as vscode from 'vscode'

const decorationType = vscode.window.createTextEditorDecorationType({
  backgroundColor: 'green',
  border: '2px solid white',
})

export function activate(context: vscode.ExtensionContext) {
  vscode.workspace.onWillSaveTextDocument(event => {
    const openEditor = vscode.window.visibleTextEditors.filter(
      editor => editor.document.uri === event.document.uri
    )[0]
    decorate(openEditor)
  })
}

function decorate(editor: vscode.TextEditor) {
  let sourceCode = editor.document.getText()
  let regex = /(console\.log)/
  let decorationsArray: vscode.DecorationOptions[] = []
  const sourceCodeArr = sourceCode.split('\n')
  for (let line = 0; line < sourceCodeArr.length; line++) {
    let match = sourceCodeArr[line].match(regex)
    if (match !== null && match.index !== undefined) {
      let range = new vscode.Range(
        new vscode.Position(line, match.index),
        new vscode.Position(line, match.index + match[1].length)
      )
      let decoration = { range }
      decorationsArray.push(decoration)
    }
  }
  editor.setDecorations(decorationType, decorationsArray)
}

How can I convert a https://graph.microsoft.com/v1.0/me/photo/$value response with Angular 9 to an image?

Because the image returned is a binary representation of the image, you need to convert it before you can display it.
Here's an example of it for Angular:
var imageUrl = 'data:image/*;base64,' + res.data;
This project is an example of how to use Graph with Angular to display the user's information; the link goes directly to the small section about the image conversion.
https://github.com/OfficeDev/O365-Angular-Microsoft-Graph-Profile/blob/7ed7e89a03525fa79b9d6bed7fb17d257a4c9ff2/app/controllers/mainController.js#L120
getProfileImg() {
  this.http
    .get('https://graph.microsoft.com/v1.0/me/photos/48x48/$value', {
      headers: { 'Content-Type': 'image/*' },
      responseType: 'arraybuffer',
    })
    .toPromise()
    .then(
      (data) => {
        const TYPED_ARRAY = new Uint8Array(data);
        // converts the typed array to a string of characters
        const STRING_CHAR = String.fromCharCode.apply(null, TYPED_ARRAY);
        // converts the string of characters to a base64 string
        let base64String = btoa(STRING_CHAR);
        // sanitize the URL that is passed as a value to the image src attribute
        this.profileImg = this.sanitizer.bypassSecurityTrustUrl(
          'data:image/*;base64, ' + base64String
        );
        console.log(this.profileImg);
      },
      (err) => {
        this.profileImg = '../../assets/img/account_circle-black-48dp.svg';
      }
    );
}
This worked for me.
As the user is already logged in, I just used getProfileImg() and was able to get the image from the AD profile.
I applied CSS as per my image needs and that worked.
Thank you!

How to combine Unicode characters in Flutter?

I need to display the combining overline character (Unicode U+0305) over some other characters, like '2' or 'x'.
https://www.fileformat.info/info/unicode/char/0305/index.htm
Is there a way to accomplish this in Dart?
Thanks in advance.
You can combine them by placing the combining character right after the letter:
String overlined = 'O\u{0305}V\u{0305}E\u{0305}R\u{0305}';
print(overlined); // Output: O̅V̅E̅R̅
A more dynamic version (with simplistic logic) would be:
void main() {
  String overlined = overline('I AM AN OVERLINED TEXT');
  print(overlined); // Output: I̅ A̅M̅ A̅N̅ O̅V̅E̅R̅L̅I̅N̅E̅D̅ T̅E̅X̅T̅
}

String overline(String text) {
  return text.split('').map((String char) {
    if (char.trim().isEmpty)
      return char;
    else
      return '$char\u{0305}';
  }).join();
}
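Applied to the characters from the question ('2' and 'x'), the same helper gives, for example:
print(overline('2 x')); // Output: 2̅ x̅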
However, this is pretty limited. A better approach would be to use the style property of Flutter's Text widget:
const Text(
  'OVER',
  style: TextStyle(decoration: TextDecoration.overline),
);

Determine the size of a placeholder to match the actual size of the image

I'm trying to load images from the internet, but I want to show a placeholder until loading finishes.
How can I determine the size of the placeholder so that it matches the actual size of the image?
The width and height of the image from the API are 2000 and 3000 respectively. How can I convert these values?
There are a few parts to this question.
First, you should download the image from the web through one of many methods:
https://docs.flutter.io/flutter/widgets/FutureBuilder-class.html
https://docs.flutter.io/flutter/widgets/Image/Image.network.html
or get the bytes and convert them to an image with https://docs.flutter.io/flutter/widgets/Image/Image.memory.html
Here are the widgets you should use.
You should use this widget as a sized placeholder.
https://docs.flutter.io/flutter/widgets/SizedBox-class.html
Here is a possible child to show a loading animation
https://docs.flutter.io/flutter/material/CircularProgressIndicator-class.html
To display it, you can either use a FutureBuilder as above or manage the state yourself, roughly like this:
class PlaceholderImage extends StatefulWidget {
  const PlaceholderImage({super.key});

  @override
  State<PlaceholderImage> createState() => _PlaceholderImageState();
}

class _PlaceholderImageState extends State<PlaceholderImage> {
  Image? im;
  Map<String, dynamic>? imageDimensions;

  @override
  void initState() {
    super.initState();
    // fetchImageDimensions() and fetchImage() stand in for your own API calls.
    fetchImageDimensions().then((imageD) => setState(() => imageDimensions = imageD));
    fetchImage().then((image) => setState(() => im = image));
  }

  @override
  Widget build(BuildContext context) {
    if (im != null) {
      // The real image has arrived, show it.
      return im!;
    }
    final dimensions = imageDimensions;
    if (dimensions != null) {
      // Dimensions are known, so reserve the right amount of space.
      return SizedBox(
        width: dimensions['width'],
        height: dimensions['height'],
        child: Card(child: Container(color: Color(dimensions['color']))),
      );
    }
    // Neither the image nor its dimensions have loaded yet.
    return const CircularProgressIndicator();
  }
}
This code is a sketch rather than something that compiles as-is (fetchImageDimensions() and fetchImage() stand in for your own API calls); it is meant to show the possible options for answering the question.
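On the 2000 x 3000 part of the question: you normally don't hand the raw pixel values straight to the placeholder. One option, sketched below under the assumption that the API gives you the width and height (placeholderFor is a hypothetical helper, not part of any package), is to keep only the aspect ratio and let the layout decide the on-screen size:
import 'package:flutter/material.dart';

// Reserves the same aspect ratio as the final image (2000 x 3000 in the
// question) so the layout does not jump when the real image arrives.
Widget placeholderFor(double apiWidth, double apiHeight) {
  return AspectRatio(
    aspectRatio: apiWidth / apiHeight, // 2000 / 3000 = 2 / 3
    child: const ColoredBox(
      color: Colors.black12,
      child: Center(child: CircularProgressIndicator()),
    ),
  );
}
If you really need absolute sizes, you can convert the image's pixel dimensions to logical pixels by dividing by MediaQuery.of(context).devicePixelRatio.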