Improve avatar upload time - XMPP

I'm using Smack to upload an avatar. It takes a long time and usually times out (sometimes even 2 minutes is not enough). Is there a way I can improve on that? Is there any other way to quickly upload an avatar?
I know I could run my own HTTP service serving avatars, but I'm not willing to go that route right now. Fetching the vCard avatar is very quick.
I use Smack 4.3.0; the Smack logs can be found here: https://pastebin.com/dQbSEpmJ
Here is the code I use:
fun setPhoto(path: String) = viewModelScope.launch(Dispatchers.IO) {
    try {
        val file = File(path)
        val vCardMgr = VCardManager.getInstanceFor(connection)
        // Load the current vCard, attach the base64-encoded image, and save it back.
        val vCard = vCardMgr.loadVCard()
        vCard.setAvatar(Base64.encodeToString(file.readBytes(), Base64.DEFAULT), FileUtils.getMimeType(path))
        vCardMgr.saveVCard(vCard)
    } catch (e: Exception) {
        // Surface failures (including timeouts) on the main thread.
        launch(Dispatchers.Main) {
            Toast.makeText(chatApp.applicationContext, e.message, Toast.LENGTH_LONG).show()
        }
    }
}

I found, testing with Openfire, that the file was so big that the resulting stanza was huge enough to crash the server. That was confirmed by Guus (an Ignite Realtime guy), and here I quote him:
Openfire has a (configurable) maximum stanza size limit. I think it’s
on 2MB. Note that when you base64 encode binary data, the encoded
result will be a lot larger than the unencoded original. I suggest
that you reduce the image size in your vcard, or use another mechanism
to exchange the data.
So compressing the image before setting it on the vCard solved the issue.
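For reference, a minimal sketch of the compression step, assuming Android's Bitmap APIs; the 256 px cap and 80% JPEG quality are arbitrary values I chose for illustration:

import android.graphics.Bitmap
import android.graphics.BitmapFactory
import java.io.ByteArrayOutputStream

fun compressAvatar(path: String, maxDim: Int = 256, quality: Int = 80): ByteArray {
    val original = BitmapFactory.decodeFile(path)
    // Downscale so the longest side is at most maxDim pixels.
    val scale = maxDim.toFloat() / maxOf(original.width, original.height)
    val scaled = if (scale < 1f) {
        Bitmap.createScaledBitmap(
            original,
            (original.width * scale).toInt(),
            (original.height * scale).toInt(),
            true
        )
    } else {
        original
    }
    // Re-encode as JPEG so the base64 form stays far below Openfire's ~2MB stanza limit.
    return ByteArrayOutputStream().use { out ->
        scaled.compress(Bitmap.CompressFormat.JPEG, quality, out)
        out.toByteArray()
    }
}

The vCard call then becomes vCard.setAvatar(Base64.encodeToString(compressAvatar(path), Base64.DEFAULT), "image/jpeg").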

How do you get a bytes object from a Google Cloud Storage bucket

My question on GitHub,
https://github.com/googleapis/python-speech/issues/52
has been active for 9 days, and the only two people who have attempted an answer have both failed. But now I think it might be possible for someone to answer it who understands how Google Cloud Storage buckets work, even if they do not understand how Google's Speech API works. In order to convert long audio files to text, they must first be uploaded to the cloud. I was using some syntax that now appears to be broken, and the following syntax might work, except that Google does not explain how to use this code with files uploaded to the cloud. So, in the code below, published here:
https://cloud.google.com/speech-to-text/docs/async-recognize#speech_transcribe_async-python
The content object has to be located in the cloud, and it needs to be a bytes object. Suppose the address of the object is: gs://audio_files/cool_audio
What syntax would I use so that the content object refers to a bytes object?
from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types

client = speech.SpeechClient()

audio = types.RecognitionAudio(content=content)
config = types.RecognitionConfig(
    encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code='en-US')

operation = client.long_running_recognize(config, audio)

print('Waiting for operation to complete...')
response = operation.result(timeout=90)
My previous answer didn't really address your question. Let me try again:
Please try this:
audio = types.RecognitionAudio(content=bytes(content, 'utf-8'))
GCS stores objects as a sequence of bytes. If your object has a Content-Encoding header, the content may be transformed while downloading (e.g., gzip content will be uncompressed if the client doesn't supply an Accept-Encoding: gzip header), and if it has a Content-Type header, the client application or library may treat the information differently.
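For what it's worth, here is a sketch of the two usual ways to wire a GCS object into RecognitionAudio; the bucket and object names come from the question, and both patterns follow the documented client-library APIs as I understand them:

from google.cloud import speech, storage
from google.cloud.speech import enums, types

# Option 1: download the object's contents as a bytes object.
content = (storage.Client()
           .bucket('audio_files')
           .blob('cool_audio')
           .download_as_string())  # returns bytes, despite the name
audio = types.RecognitionAudio(content=content)

# Option 2: skip the download and point the API at the object directly.
audio = types.RecognitionAudio(uri='gs://audio_files/cool_audio')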

Play! Framework 2.6: gzip filter if the response size is greater than 50 bytes

I am currently working with Play! Framework 2.6. I am looking into gzipping my responses if they are greater than 80 bytes. However, the framework offers no built-in way to do this. Based on the documentation, I can make use of the following code snippet:
new GzipFilter(shouldGzip = (request, response) =>
    response.body.contentType.exists(_.startsWith("text/html")))
However, it does not specify where I would create this. Any idea how I can make it gzip a certain response only if that response is greater than 50 bytes?
By default, response bodies are streamed, which means you do not know how big the response body will be.
If you already know the size of the response body (e.g. you're serving a file from Amazon S3 and already know the file size), you can set the Content-Length header and check it in GzipFilter.
You will also likely need to implement your own GzipFilter and adapt it so that it checks the Content-Length.
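A sketch of what that could look like in Play 2.6, assuming the result's HttpEntity exposes contentLength as an Option[Long] (streamed bodies report no length and would stay uncompressed here):

import javax.inject.Inject
import akka.stream.Materializer
import play.api.http.DefaultHttpFilters
import play.api.mvc.{RequestHeader, Result}
import play.filters.gzip.GzipFilter

class Filters @Inject()(implicit mat: Materializer) extends DefaultHttpFilters(
  new GzipFilter(shouldGzip = (request: RequestHeader, response: Result) =>
    // Only gzip when the body length is known and exceeds 50 bytes.
    response.body.contentLength.exists(_ > 50))
)

If the class is not named Filters in the root package, point play.http.filters at it in application.conf.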

PlayWS calculate the size of a http call without consuming the stream

I'm currently using the PlayWS HTTP client, which returns an Akka stream. From my understanding, I can consume the stream and turn it into a byte array to calculate the size. However, this also consumes the stream, and I can't use it anymore. Any way around this?
I think there are two different aspects to this question.
You want to know the size of the server response in advance, e.g. to prepare a buffer. Unfortunately, there is no guaranteed way to do this. The HTTP 1.1 spec explicitly allows a transfer mode in which the server does not know the size of the response in advance: chunked transfer encoding. See this quote from section 3.3.1, Transfer-Encoding:
A recipient MUST be able to parse the chunked transfer coding
(Section 4.1) because it plays a crucial role in framing messages
when the payload body size is not known in advance.
Section 3.3.3, Message Body Length, specifies how the length of a message body is determined; besides the aforementioned chunked transfer encoding, it also contains the quite unhelpful:
Otherwise, this is a response message without a declared message
body length, so the message body length is determined by the
number of octets received prior to the server closing the
connection.
This is allowed for backward compatibility and its use is discouraged, but it is still legal.
Still, in many real-world scenarios you can use the Content-Length header field that the server may return. However, there is a catch here as well: if gzip Content-Encoding is used, Content-Length will contain the size of the compressed body.
To sum up: in the general case you can't get the size of the message body before you have fully received the server's response, i.e., in terms of code, performed a blocking call on the response. You may try to use Content-Length, and it might or might not help in your specific case.
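When the server does send it, reading the header is a one-liner; a sketch against the Play WS Scala API, with the caveats above (absent for chunked responses, compressed size under gzip):

// None for chunked responses; the compressed size when Content-Encoding: gzip.
val declaredSize: Option[Long] =
  response.header("Content-Length").map(_.toLong)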
You already have a fully downloaded response (or you are OK with blocking on your StreamedResponse), and you want to process it by first getting the size and only then processing the actual data. In that case you may first use the getBodyAsBytes method, which returns an IndexedSeq[Byte] and thus has a size, and then convert it into a new Source using Source.single, which is actually exactly what the default (i.e. non-streaming) implementation of getBodyAsSource does.
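A sketch of this second approach, assuming the Scala flavor of the API where the buffered body arrives as an akka.util.ByteString:

import akka.stream.scaladsl.Source
import akka.util.ByteString
import play.api.libs.ws.WSResponse

// Buffer the whole body once, measure it, then re-wrap it as a Source
// so downstream code can keep treating it as a stream.
def sizeThenStream(response: WSResponse): (Int, Source[ByteString, _]) = {
  val bytes: ByteString = response.bodyAsBytes
  (bytes.size, Source.single(bytes))
}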

Don't receive results other than those from the first audio chunk

I want some level of real-time speech-to-text conversion. I am using the WebSockets interface with interim_results=true. However, I am receiving results for the first audio chunk only. The second, third, ... audio chunks that I am sending are not getting transcribed. I do know that my receiver is not blocked, since I do receive the inactivity message:
{"error": "Session timed out due to inactivity after 30 seconds."}
Please let me know if I am missing something or if I need to provide more contextual information.
Just for reference, this is my init JSON:
{
    "action": "start",
    "content-type": "audio/wav",
    "interim_results": true,
    "continuous": true,
    "inactivity_timeout": 10
}
In the result that I get for the first audio chunk, the final JSON field is always received as false.
Also, I am using Go, but that should not really matter.
EDIT:
Consider the following pseudo-log:
localhost-server receives first 4 seconds of binary data # let's say Binary 1
Binary 1 is sent to Watson
{interim_result_1 for first chunk}
{interim_result_2 for first chunk}
localhost-server receives last 4 seconds of binary data # let's say Binary 2
Binary 2 is sent to Watson
Send {"action": "stop"} to Watson
{interim_result_3 for first chunk}
final result for the first chunk
I am not receiving any transcription for the second chunk
Link to code
You are getting the time-out message because the service waits for you either to send more audio or to send a message signalling the end of the audio submission. Are you sending that message? It's very easy, in either of two ways (see the Go sketch below):
By sending a JSON text message with the action key set to the value stop: {"action": "stop"}
By sending an empty binary message
https://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/doc/speech-to-text/websockets.shtml
Please let me know if this does not resolve your problem.
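A minimal sketch of the stop message in Go, assuming the gorilla/websocket package (an assumption; any WebSocket client can send the same frames):

package main

import "github.com/gorilla/websocket"

// signalStop tells the service that the audio submission is complete.
// The equivalent alternative is an empty binary message:
//     conn.WriteMessage(websocket.BinaryMessage, []byte{})
func signalStop(conn *websocket.Conn) error {
    return conn.WriteMessage(websocket.TextMessage, []byte(`{"action": "stop"}`))
}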
This is a bit late, but I've open-sourced a Go SDK for Watson services here:
https://github.com/liviosoares/go-watson-sdk
There is some documentation about the speech-to-text binding here:
https://godoc.org/github.com/liviosoares/go-watson-sdk/watson/speech_to_text
There is also an example of streaming data to the API in the _test.go file:
https://github.com/liviosoares/go-watson-sdk/blob/master/watson/speech_to_text/speech_to_text_test.go
Perhaps this can help you.
The solution to this question was to set the size header of the WAV file to 0.
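For completeness, a sketch of what that patch could look like, assuming the canonical 44-byte PCM WAV header (bytes 4-7 hold the RIFF chunk size, bytes 40-43 the data chunk size, both little-endian):

package main

import "encoding/binary"

// zeroWavSizes clears both size fields of a canonical 44-byte WAV header,
// so the declared length no longer contradicts the audio streamed after it.
func zeroWavSizes(header []byte) {
    binary.LittleEndian.PutUint32(header[4:8], 0)   // RIFF chunk size
    binary.LittleEndian.PutUint32(header[40:44], 0) // "data" chunk size
}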

Play encoded audio stream with Web Audio API

I'm sending an encoded live audio stream (MP3 or Ogg) over WebSockets, and I want to play it with the Web Audio API.
I have read and tried several things, but nothing has worked so far...
I always tried to do it with the decodeAudioData method, but that method cannot handle a continuous stream.
So my last approach was this:
ws.onmessage = function (evt) {
    context.decodeAudioData(evt.data, function (decodedData) {
        // Schedule each decoded chunk to start where the previous one ends.
        source = context.createBufferSource();
        source.buffer = decodedData;
        source.start(startTime);
        source.connect(context.destination);
        startTime += decodedData.duration;
    },
    function (e) {
        // Decode error; inspect e for details.
        var test = e;
    });
};
This works, at least with MP3s, but not very well: between the received chunks there is a very small gap, so there is no fluid playback of the stream. I don't know the reason for that... maybe the decodedData.duration property is not accurate enough, or there is some kind of delay somewhere.
Anyway, it's not working at all with Ogg files. I can hear the first received chunk, but the rest is ignored. Maybe this has something to do with missing headers?
Is there any other method in the Web Audio API to play an encoded live stream besides decodeAudioData? I could not find anything...
Thanks for your help!
Don't do this over WebSockets if you can help it. Let the browser do its job and play the stream over HTTP. Otherwise, you must reinvent everything.
If you insist on reinventing everything for some reason, you must (see the sketch after this list):
Buffer incoming data
Decode that data
Buffer decoded data
Play back your decoded PCM buffers with a script node
Handle the times when you have buffer underruns/overruns (likely by playing back silence or dropping PCM samples)
How you do each of these items depends on your specific needs and implementation, so I would recommend splitting up the question if you get stuck on any of that.
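For illustration, a minimal sketch of the playback step (item 4, plus the underrun half of item 5), assuming your decoding code pushes mono PCM chunks into pcmQueue as Float32Arrays; ScriptProcessorNode is deprecated in favour of AudioWorklet, but it keeps the example short:

var context = new AudioContext();
var pcmQueue = [];  // filled elsewhere with decoded Float32Array chunks
var current = null;
var offset = 0;

var node = context.createScriptProcessor(4096, 0, 1);
node.onaudioprocess = function (e) {
    var out = e.outputBuffer.getChannelData(0);
    for (var i = 0; i < out.length; i++) {
        if (!current || offset >= current.length) {
            current = pcmQueue.shift();  // next decoded chunk, if any
            offset = 0;
        }
        out[i] = current ? current[offset++] : 0;  // underrun: play silence
    }
};
node.connect(context.destination);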