(FB Instant Games) How can I get/set screen resolution?

How can I get/set the screen resolution for Instant Games in the mobile Facebook app or Messenger? I tried getting it via "window.screen.width" and "window.innerWidth", and both returned 360 pixels (but I get 980 in the Chrome browser). I haven't used any meta tags that could limit the resolution.

Use window.innerWidth and window.innerHeight, and multiply them by window.devicePixelRatio to get the resolution in physical pixels:
const pixelWidth = window.innerWidth * window.devicePixelRatio;
const pixelHeight = window.innerHeight * window.devicePixelRatio;
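If you render to a canvas, a minimal sketch (assuming a full-screen <canvas> element in the page) is to size the backing store in physical pixels while keeping the CSS size in logical pixels:

const canvas = document.querySelector("canvas");

function resize() {
  const dpr = window.devicePixelRatio || 1;
  // CSS (logical) size fills the viewport...
  canvas.style.width = window.innerWidth + "px";
  canvas.style.height = window.innerHeight + "px";
  // ...while the backing store uses physical pixels for crisp rendering.
  canvas.width = Math.round(window.innerWidth * dpr);
  canvas.height = Math.round(window.innerHeight * dpr);
}

window.addEventListener("resize", resize);
resize();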

Related

Unity Agora screenshare blurry video quality

How can I improve the image quality while sharing the screen using the Agora SDK with Unity? I've used the settings below for the VideoProfile:
mRtcEngine.SetVideoEncoderConfiguration(new VideoEncoderConfiguration()
{
    // Sets the video encoding bitrate (Kbps).
    minBitrate = 100,
    bitrate = 1130,
    // Sets the video frame rate.
    minFrameRate = 10,
    frameRate = FRAME_RATE.FRAME_RATE_FPS_24,
    // Sets the video resolution.
    dimensions = new VideoDimensions() { width = EncodeWidth, height = EncodeHeight },
    // Sets the video encoding degradation preference under limited bandwidth.
    // MAINTAIN_QUALITY means to degrade the frame rate to maintain the video quality.
    degradationPreference = DEGRADATION_PREFERENCE.MAINTAIN_QUALITY,
    // Note: if the remote user's video surface is set to flip horizontally,
    // we should flip the video before sending.
    mirrorMode = VIDEO_MIRROR_MODE_TYPE.VIDEO_MIRROR_MODE_ENABLED,
    // Sets the orientation mode of the video.
    orientationMode = ORIENTATION_MODE.ORIENTATION_MODE_FIXED_PORTRAIT
});
The output from the Editor to a device looks fine, but the output from a device to the Editor or to another device looks blurry. I've tested on Wi-Fi with both devices to ensure a good connection, and I also forced the settings to prefer image quality over frame rate:
// Prefer image quality over frame rate.
mRtcEngine.SetVideoQualityParameters(false);
// Disable dual-stream mode so only the high-quality stream is sent.
mRtcEngine.EnableDualStreamMode(false);
// Default to receiving the high-quality stream from remote users.
mRtcEngine.SetRemoteDefaultVideoStreamType(REMOTE_VIDEO_STREAM_TYPE.REMOTE_VIDEO_STREAM_HIGH);
Did I miss anything else that would improve the image quality? Also, how can I share only a rectangular part of the screen, with the rectangle draggable by the user?
Blurry videos are usually caused by low bitrates and resolution ratios. Check the following:
Check the videoProfile. If possible, set videoProfile to a higher level to see whether the video is clearer (see the sketch after this list).
Check the stream type of the receiver. If the stream type is low, call the setRemoteVideoStreamType method to switch from the low stream to the high stream. (You did this.)
Switch to another Wi-Fi network to ensure that the blurry video is not caused by a poor Internet connection.
Turn off all pre-processing options.
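For the first point, here is a minimal sketch of what a higher-level configuration could look like; the exact dimensions and bitrate are illustrative assumptions to test with, not recommended values:

// Sketch only: raise resolution and bitrate to test whether the blurriness improves.
mRtcEngine.SetVideoEncoderConfiguration(new VideoEncoderConfiguration()
{
    dimensions = new VideoDimensions() { width = 1280, height = 720 },
    frameRate = FRAME_RATE.FRAME_RATE_FPS_24,
    bitrate = 2260, // roughly double the original 1130 Kbps; tune for your network
    degradationPreference = DEGRADATION_PREFERENCE.MAINTAIN_QUALITY
});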
If the issue persists, contact Agora customer support (via the ticket system) with the following information:
The uid of the user who sees the blurry video.
The time frame during which the blurry video appears.
The user's SDK logs and screen recording files.
You can also check the statistics of every call in Agora Analytics in the Dashboard.

Bad video quality while using custom video source in Unity

I'm trying to use SetExternalVideoSource and PushVideoFrame to send custom video frames to the RTC engine with the methods described here. However, the video quality is not as good as with the default video streaming options, even though I am pushing video frames at the same resolution. Has anyone noticed this difference before? I wonder if this is expected? Or maybe there is a way to set the custom video quality that I overlooked?
Hope the Slack channel answer helped you with this. For others reading this, here is what the discussion was:
"You should give a resolution configuration. Here is the config I use in my advanced demo app:"
mRtcEngine.SetVideoEncoderConfiguration(new VideoEncoderConfiguration()
{
    bitrate = 1130,
    frameRate = FRAME_RATE.FRAME_RATE_FPS_15,
    dimensions = new VideoDimensions() { width = Screen.width, height = Screen.height },
    // Note: if the remote user's video surface is set to flip horizontally,
    // we should flip the video before sending.
    mirrorMode = VIDEO_MIRROR_MODE_TYPE.VIDEO_MIRROR_MODE_ENABLED
});

How can I submit scores with one decimal to Game Center?

I want to send my score with one decimal place to Game Center. How can I do this?
You can only submit 64-bit integers as scores to a leaderboard. From the documentation:
To Game Center, a score is just a 64-bit integer value reported by your application. You are free to decide what a score means, and how your application calculates it. When you are ready to add the leaderboard to your application, you configure leaderboards on iTunes Connect to tell Game Center how a score should be formatted and displayed to the player. Further, you provide localized strings so that the scores can be displayed correctly in different languages. A key advantage of configuring leaderboards in iTunes Connect is that the Game Center application can show your game's scores without you having to write any code.
That doc page should tell you about formatting your score. To display scores with one decimal place, the usual approach is to multiply the score by 10, submit it as an integer, and configure the leaderboard's format settings in iTunes Connect to display it as a fixed-point number to the nearest tenth.
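A minimal sketch of that approach (the leaderboard identifier is a hypothetical placeholder, and this assumes GameKit is linked):

#import <GameKit/GameKit.h>

- (void)reportScore:(float)score
{
    // Store e.g. 12.3 as the integer 123; the leaderboard's fixed-point
    // format in iTunes Connect displays it with one decimal place again.
    GKScore *gkScore = [[GKScore alloc] initWithLeaderboardIdentifier:@"com.example.leaderboard"];
    gkScore.value = (int64_t)llroundf(score * 10.0f);
    [GKScore reportScores:@[gkScore] withCompletionHandler:^(NSError *error) {
        if (error != nil) {
            NSLog(@"Score report failed: %@", error);
        }
    }];
}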
Try this:
- (IBAction)setScore
{
    float score = (float)self.currentScore / 100.0f;
    currentScoreLabel.text = [NSString stringWithFormat:@"%.1f", score];
    NSLog(@"%lld", self.currentScore);
}

How to write a web-based music visualizer?

I'm trying to find the best approach to build a music visualizer to run in a browser over the web. Unity is an option, but I'll need to build a custom audio import/analysis plugin to get the end user's sound output. Quartz does what I need but only runs on Mac/Safari. WebGL seems not ready. Raphael is mainly 2D, and there's still the issue of getting the user's sound... any ideas? Has anyone done this before?
Making something audio-reactive is pretty simple. Here's an open source site with lots of audio-reactive examples.
As for how to do it you basically use the Web Audio API to stream the music and use its AnalyserNode to get audio data out.
"use strict";
const ctx = document.querySelector("canvas").getContext("2d");
ctx.fillText("click to start", 100, 75);
ctx.canvas.addEventListener('click', start);
function start() {
ctx.canvas.removeEventListener('click', start);
// make a Web Audio Context
const context = new AudioContext();
const analyser = context.createAnalyser();
// Make a buffer to receive the audio data
const numPoints = analyser.frequencyBinCount;
const audioDataArray = new Uint8Array(numPoints);
function render() {
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
// get the current audio data
analyser.getByteFrequencyData(audioDataArray);
const width = ctx.canvas.width;
const height = ctx.canvas.height;
const size = 5;
// draw a point every size pixels
for (let x = 0; x < width; x += size) {
// compute the audio data for this point
const ndx = x * numPoints / width | 0;
// get the audio data and make it go from 0 to 1
const audioValue = audioDataArray[ndx] / 255;
// draw a rect size by size big
const y = audioValue * height;
ctx.fillRect(x, y, size, size);
}
requestAnimationFrame(render);
}
requestAnimationFrame(render);
// Make a audio node
const audio = new Audio();
audio.loop = true;
audio.autoplay = true;
// this line is only needed if the music you are trying to play is on a
// different server than the page trying to play it.
// It asks the server for permission to use the music. If the server says "no"
// then you will not be able to play the music
// Note if you are using music from the same domain
// **YOU MUST REMOVE THIS LINE** or your server must give permission.
audio.crossOrigin = "anonymous";
// call `handleCanplay` when it music can be played
audio.addEventListener('canplay', handleCanplay);
audio.src = "https://twgljs.org/examples/sounds/DOCTOR%20VOX%20-%20Level%20Up.mp3";
audio.load();
function handleCanplay() {
// connect the audio element to the analyser node and the analyser node
// to the main Web Audio context
const source = context.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(context.destination);
}
}
canvas { border: 1px solid black; display: block; }
<canvas></canvas>
Then it's just up to you to draw something creative.
Note some troubles you'll likely run into:
At this point in time (2017/1/3) neither Android Chrome nor iOS Safari supports analysing streaming audio data. Instead you have to load the entire song. Here's a library that tries to abstract that a little.
On mobile you cannot automatically play audio. You must start the audio inside an input event handler based on user input like 'click' or 'touchstart'.
As pointed out in the sample, you can only analyse audio if the source is either from the same domain or you ask for CORS permission and the server grants it. AFAIK only Soundcloud gives permission, and it's on a per-song basis: the individual artist's settings for each song decide whether or not audio analysis is allowed.
To explain this part: by default you have permission to access all data from the same domain but no permission from other domains.
When you add
audio.crossOrigin = "anonymous";
That basically says "ask the server for permission on behalf of user 'anonymous'". The server can grant or deny permission; it's up to the server. This includes asking a server on the same domain, which means if you're going to request a song from the same domain you need to either (a) remove the line above or (b) configure your server to give CORS permission. Most servers by default do not give CORS permission, so if you add that line and the server does not grant permission, even on the same domain, trying to analyse the audio will fail.
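For reference, here is a minimal sketch of a server granting that permission, using Node's built-in modules; the port and file path are illustrative assumptions:

const http = require("http");
const fs = require("fs");

http.createServer((req, res) => {
  res.writeHead(200, {
    "Content-Type": "audio/mpeg",
    // The CORS header that grants any origin permission to use the audio.
    "Access-Control-Allow-Origin": "*",
  });
  fs.createReadStream("./song.mp3").pipe(res);
}).listen(8080);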
music: DOCTOR VOX - Level Up
By WebGL being "not ready", I'm assuming you're referring to its market penetration (it's only supported in WebKit and Firefox at the moment).
Other than that, equalisers are definitely possible using HTML5 audio and WebGL. A guy called David Humphrey has blogged about making different music visualisers using WebGL and was able to create some really impressive ones. He has also posted videos of the visualisations.
I used SoundManager2 to pull the waveform data from the mp3 file. That feature requires Flash 9, so it might not be the best approach.
My waveform demo with HTML5 Canvas:
http://www.momentumracer.com/electriccanvas/
and WebGL:
http://www.momentumracer.com/electricwebgl/
Sources:
https://github.com/pepez/Electric-Canvas
Depending on complexity, you might be interested in trying out Processing (http://www.processing.org). It has really easy tools to make web-based apps, and tools to get the FFT and waveform of an audio file; see the sketch below.
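For example, a minimal Processing sketch using the bundled Minim library (the file name is an assumption) draws a simple FFT spectrum:

import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer player;
FFT fft;

void setup() {
  size(512, 200);
  minim = new Minim(this);
  player = minim.loadFile("song.mp3"); // hypothetical file in the sketch's data folder
  player.loop();
  fft = new FFT(player.bufferSize(), player.sampleRate());
}

void draw() {
  background(0);
  stroke(255);
  fft.forward(player.mix); // analyse the currently playing audio
  for (int i = 0; i < fft.specSize(); i++) {
    // draw one vertical line per frequency band
    line(i, height, i, height - fft.getBand(i) * 4);
  }
}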

Can I determine FLV dimensions using FlowPlayer?

I'm currently integrating a custom Flash Video player plugin into a .Net CMS. The plugin editor currently requires the user to provide the video's width and height in order for my code to push out the relevant dimensions to the FlowPlayer.
I was wondering, is there a way to automatically determine the FLV's width and height rather than having the editor provide this information each time? Ideally, the user would simply provide the FLV location and the new plugin would automatically pass the width/height to the FlowPlayer.
Add this to your Flowplayer configuration:

clip: {
    onMetaData: function(clip) {
        // parse the dimensions from the clip's metadata (radix 10)
        var width = parseInt(clip.metaData.width, 10);
        var height = parseInt(clip.metaData.height, 10);
        $(this.getParent()).css({ width: width, height: height });
    }
},

The onMetaData event fires after the file's meta information, which includes the width and height, has loaded. In the example above I resize the player to match the movie's dimensions.