Need to configure Liquidsoap for transcoding - Ubuntu 16.04

I'm trying to transcode a stream with Liquidsoap and output it to Icecast2.
Below is my config, taken from the official cookbook at http://savonet.sourceforge.net/doc-svn/cookbook.html:
# Input the stream,
# from an Icecast server or any other source
url = "http://www.protonradio.com:8000/schedule.m3u"
input = mksafe(input.http(url))
# First transcoder: MP3 32 kbps
# We also degrade the samplerate, and encode in mono
# Accordingly, a mono conversion is performed on the input stream
output.icecast(
  %mp3(bitrate=32, samplerate=22050, stereo=false),
  mount="/your-stream-32.mp3",
  host="streaming.example.com", port=8000, password="xxx",
  mean(input))
When I try to run it with ./radio.liq
I get this error:
root@Ubuntu:/etc/liquidsoap# ./radio.liq
./radio.liq: line 4: url: command not found
./radio.liq: line 5: syntax error near unexpected token `('
./radio.liq: line 5: `input = mksafe(input.http(url))'
root@Ubuntu:/etc/liquidsoap#
Here's what happens when I run it with this command:
root@Ubuntu:/etc/liquidsoap# liquidsoap radio2.liq
init: security exit, root euid (user).
root@Ubuntu:/etc/liquidsoap#
I also get buffer errors with this stream URL, http://46.21.106.168:80:
2016/09/30 15:57:17 [http_4756:3] Buffer overrun: Dropping 0.03s.
2016/09/30 15:57:20 [http_4756:3] Buffer overrun: Dropping 0.00s.
2016/09/30 15:57:26 [http_4756:3] Buffer overrun: Dropping 0.00s.
2016/09/30 15:57:37 [http_4756:3] Buffer overrun: Dropping 0.01s.
2016/09/30 15:57:44 [http_4756:3] Buffer overrun: Dropping 0.00s.
2016/09/30 15:58:11 [http_4756:3] Buffer overrun: Dropping 0.00s.
2016/09/30 15:58:47 [http_4756:3] Buffer overrun: Dropping 0.00s.

You should start the liquidsoap interpreter and feed your script to it, like this:
liquidsoap radio.liq
In your example you start the script directly from the command line, so it is executed by the shell (bash), not by liquidsoap.
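The init: security exit, root euid (user). message from your second attempt is a separate issue: liquidsoap refuses to start as root by design. Run it as an unprivileged user, or, if you accept the risk and your liquidsoap version supports the setting, allow root explicitly at the top of the script:
set("init.allow_root", true)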

To add to the comments on Alexey's answer:
Your script did not run because you did not tell the OS which interpreter to run it with. On Windows the file extension (.exe, .txt, .doc) determines which application opens a file. On Unix the first line, known as the "shebang", tells the OS which interpreter must run the file.
So if you first check where your liquidsoap is installed with:
which liquidsoap
Then add the returned path to your script's first line, like so:
#!/usr/bin/liquidsoap
Unix will now know which interpreter to run it with.
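Putting it together, the script from the question would then look like this (a sketch; the interpreter path is whatever which liquidsoap printed, and the file must be executable, e.g. chmod +x radio.liq):
#!/usr/bin/liquidsoap
url = "http://www.protonradio.com:8000/schedule.m3u"
input = mksafe(input.http(url))
output.icecast(
  %mp3(bitrate=32, samplerate=22050, stereo=false),
  mount="/your-stream-32.mp3",
  host="streaming.example.com", port=8000, password="xxx",
  mean(input))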
On your second observation (not really an issue): the overruns appear because you are fetching a stream from an Icecast server over HTTP. The Icecast server is generally configured to "burst" some data to you on connection (normally so players can fill their buffers). The buffer used by the input.http operator is too small for this sudden burst of data, so liquidsoap drops the excess and logs an overrun. To fix this, increase the buffer sizes:
input = mksafe(input.http(url, buffer=2., max=120.))

Related

STM32CubeIDE data mismatch

I am trying to work with the TouchGFX Designer demo for my STM32G071RB and the X-NUCLEO-GFX01M2.
The example project works fine if I flash it via TouchGFX Designer.
When I import it into STM32CubeIDE, after setting up the external loader found in the TouchGFX directory, I get the following error:
Erasing memory corresponding to segment 0:
Erasing internal memory sectors [0 34]
Erasing memory corresponding to segment 1:
Erasing external memory sector 0
Download in Progress:
File download complete
Time elapsed during download operation: 00:00:02.135
Verifying ...
Error: Data mismatch found at address 0x90000000 (byte = 0x00 instead of 0xD2)
Error: Download verification failed
Shutting down...
Exit.
I've also tried using STM32CubeProgrammer, but I get the same error.
How can I fix this?

Text to speech - not getting audio in the .wav (connection refused)

I run a Flask server in which this function is called whenever a specific action occurs on the page:
import os

# Imports as in the Azure Speech SDK quickstart
# (pip install azure-cognitiveservices-speech); cfg is loaded elsewhere.
from azure.cognitiveservices.speech import SpeechConfig, SpeechSynthesizer
from azure.cognitiveservices.speech.audio import AudioOutputConfig

def generate_audio(text, target):
    # create the path
    tmp_dir = os.path.join(os.getcwd(), "app/data/audio")
    if not os.path.exists(tmp_dir):
        os.mkdir(tmp_dir)
    path = os.path.join(tmp_dir, f'{target}-{text}.wav')
    # query the API
    speech_config = SpeechConfig(
        subscription=cfg['speech']['key'], region=cfg['speech']['location'])
    audio_config = AudioOutputConfig(filename=path)
    synthesizer = SpeechSynthesizer(
        speech_config=speech_config, audio_config=audio_config)
    synthesizer.speak_text("A simple test")
At the end of the execution, the file that should contain the audio is just an empty 0 B file. I literally copy-pasted the quickstart guide, so I do not know what is wrong.
One thing I did try: changing the subscription key to something random raised no error. Nothing shows up in the logs on the Azure service webpage either.
Here are the cancellation details:
SpeechSynthesisCancellationDetails(reason=CancellationReason.Error, error_details="Connection failed (no connection to the remote host). Internal error: 11. Error details: Code: 0. USP state: 2. Received audio size: 0 bytes.")
Here's the log
https://pastebin.com/aapsMXYc
I was facing the same issue and found that I was entering an incorrect location name. For example, in the resource overview you will see a location name like "Central India", but in the SDK it must be entered as "centralindia" (I found this name under key management). Hope this helps resolve the issue.
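Applied to the snippet in the question, that would look like this (a minimal sketch; the key is a placeholder, and the short region identifier is the one shown on the resource's key-management page):
speech_config = SpeechConfig(subscription="<your-key>", region="centralindia")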

Unable to connect to the NetBeans Distribution because of Zero sized file

I recently reinstalled NetBeans IDE on my Windows 10 PC in order to restore some unrelated configurations. When I tried checking for new plugins so I could download the Sakila sample database,
I get this error.
I've tested the connection on both No Proxy and Use Proxy Settings, and both connection tests seem to end successfully.
I have allowed NetBeans through my firewall, but this has changed nothing either.
I haven't touched my proxy configuration, so it's on the default (auto-detect). Switching auto-detect off doesn't change anything either, no matter what proxy configuration I have in NetBeans.
Here's part of my log file that might be helpful:
Compiler: HotSpot 64-Bit Tiered Compilers
Heap memory usage: initial 32,0MB maximum 910,5MB
Non heap memory usage: initial 2,4MB maximum -1b
Garbage collector: PS Scavenge (Collections=12 Total time spent=0s)
Garbage collector: PS MarkSweep (Collections=3 Total time spent=0s)
Classes: loaded=6377 total loaded=6377 unloaded 0
INFO [org.netbeans.core.ui.warmup.DiagnosticTask]: Total memory 17.130.041.344
INFO [org.netbeans.modules.autoupdate.updateprovider.DownloadListener]: Connection content length was 0 bytes (read 0bytes), expected file size can`t be that size - likely server with file at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b is temporary down
INFO [org.netbeans.modules.autoupdate.ui.Utilities]: Zero sized file reported at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b
java.io.IOException: Zero sized file reported at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b
at org.netbeans.modules.autoupdate.updateprovider.DownloadListener.doCopy(DownloadListener.java:155)
at org.netbeans.modules.autoupdate.updateprovider.DownloadListener.streamOpened(DownloadListener.java:78)
at org.netbeans.modules.autoupdate.updateprovider.NetworkAccess$Task$1.run(NetworkAccess.java:111)
Caused: java.io.IOException: Zero sized file reported at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b
at org.netbeans.modules.autoupdate.updateprovider.DownloadListener.notifyException(DownloadListener.java:103)
at org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogCache.copy(AutoupdateCatalogCache.java:246)
at org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogCache.writeCatalogToCache(AutoupdateCatalogCache.java:99)
at org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogProvider.refresh(AutoupdateCatalogProvider.java:154)
at org.netbeans.modules.autoupdate.services.UpdateUnitProviderImpl.refresh(UpdateUnitProviderImpl.java:180)
at org.netbeans.api.autoupdate.UpdateUnitProvider.refresh(UpdateUnitProvider.java:196)
[catch] at org.netbeans.modules.autoupdate.ui.Utilities.tryRefreshProviders(Utilities.java:433)
at org.netbeans.modules.autoupdate.ui.Utilities.doRefreshProviders(Utilities.java:411)
at org.netbeans.modules.autoupdate.ui.Utilities.presentRefreshProviders(Utilities.java:405)
at org.netbeans.modules.autoupdate.ui.UnitTab$14.run(UnitTab.java:806)
at org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:1423)
at org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:2033)
It might be that the update server is just down right now; I haven't been able to verify this. But it might also be something wrong with my configuration. I'm going crazy!
Something that worked for me was changing "http:" to "https:" in the update URLs,
i.e. changing "http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz"
to "https://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz".
No idea why that makes it work on my end. I'm running Linux Mint 19.1.

Problems with Chronicle Map on Windows

I am trying to use ChronicleMap for my index structure. This seems to work fine on Linux, but when I run my JUnit test on Windows (which is my development environment), I keep getting the error: java.io.IOException: Unable to wait until the file is ready, likely the process which created the file crashed or hung for more than 1 minute.
Here's the problematic code snippet:
File file = new File(idxFullPath);
ChronicleMap<Integer, int[]> idx =
        ChronicleMapBuilder.of(Integer.class, int[].class)
                .averageValue(getSampleIdxList())
                .entries(IDX_MAX_SIZE)
                .createPersistedTo(file);
The following exception is thrown:
[2016-06-17 14:32:47.779] ERROR main com.mcm.op.persistence.Persistence ERR java.io.IOException: Unable to wait until the file is ready, likely the process which created the file crashed or hung for more than 1 minute
at net.openhft.chronicle.map.ChronicleMapBuilder.waitUntilReady(ChronicleMapBuilder.java:1520)
at net.openhft.chronicle.map.ChronicleMapBuilder.openWithExistingFile(ChronicleMapBuilder.java:1583)
at net.openhft.chronicle.map.ChronicleMapBuilder.createWithFile(ChronicleMapBuilder.java:1444)
at net.openhft.chronicle.map.ChronicleMapBuilder.createPersistedTo(ChronicleMapBuilder.java:1405)
at com.mcm.op.persistence.Persistence.initIdx(Persistence.java:131)
at com.mcm.op.persistence.Persistence.init(Persistence.java:177)
at com.mcm.op.persistence.PersistenceTest.initPersist(PersistenceTest.java:47)
at com.mcm.op.persistence.PersistenceTest.setUp(PersistenceTest.java:29)
Indeed, it is likely that the process which created the file crashed, or was stopped while being debugged, or something like that.
If it's OK to have a fresh index from one unit-test run to the next, I recommend either deleting the file at idxFullPath before creating a Chronicle Map, or randomizing the mapping file via something like File.createTempFile(). In either case File.deleteOnExit() could prove helpful.
If you want to keep the index between unit-test runs and always use the same file at idxFullPath for persistence, you could try builder.createOrRecoverPersistedTo() instead of the plain createPersistedTo() map-creation method. However, this might slow down map creation. Both variants are sketched below.
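A sketch of both variants, reusing the builder settings and names from the question (the temp-file prefix and suffix are arbitrary):
// Variant 1: fresh index per test run, backed by a throwaway temp file.
File file = File.createTempFile("idx", ".cmap");
file.deleteOnExit(); // clean up when the JVM exits
ChronicleMap<Integer, int[]> idx =
        ChronicleMapBuilder.of(Integer.class, int[].class)
                .averageValue(getSampleIdxList())
                .entries(IDX_MAX_SIZE)
                .createPersistedTo(file);

// Variant 2: keep the same file between runs, recovering it if the
// previous process died while holding it.
ChronicleMap<Integer, int[]> idx2 =
        ChronicleMapBuilder.of(Integer.class, int[].class)
                .averageValue(getSampleIdxList())
                .entries(IDX_MAX_SIZE)
                .createOrRecoverPersistedTo(new File(idxFullPath));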

Error: No chunks found for a file with Mongo gridFS

An error has crashed my application server and I can't figure out what's causing it. My application is built with Meteor and hosted on modulus.io. Here are my application logs:
Error: no chunks found for file, possibly corrupt
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:817:20
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:594:7
at /mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:35
at Cursor.close (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:989:5)
at Cursor.nextObject (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:17)
at commandHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:727:14)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/db.js:1916:9
at Server.Base._callHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/connection/base.js:448:41)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/connection/server.js:481:18
at [object Object].MongoReply.parseBody (/mnt/data/2/node_modules/mongodb/lib/mongodb/responses/mongo_reply.js:68:5)
[2015-03-29T22:05:57.573Z] Application CRASH detected. Exit code 8.
Most probably this is a MongoDB bug in GridFS (which has since been fixed):
Writing two or more different files concurrently from different node
processes using the GridStore.writeFile command results in some files
not being correctly written (ending up with a number of corrupt files
in the gridstore). Ending up with corrupt files even with all
writeFile calls being successful and no indication of error.
writeFile occasionally fails with error "chunks out of order", but
this happens very rarely (something like 1 failed writeFile for 100
corrupt files or more).
Based on the comments in that discussion, the problem will be fixed if you update mongo; the affected GridFS files should be removed, as they are corrupt.
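Removal can be done from the mongo shell; a hypothetical sketch (assumes the default fs GridFS prefix, with "broken.png" standing in for the corrupt file's name):
// look up the corrupt file's document, then delete its chunks and the document
var f = db.fs.files.findOne({ filename: "broken.png" });
db.fs.chunks.remove({ files_id: f._id });
db.fs.files.remove({ _id: f._id });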
Error: no chunks found for file, possibly corrupt
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:808:20
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:586:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/collection/query.js:164:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/cursor.js:778:35
I had a similar occurrence, but it turned out that the file sought in a GFS read stream had actually been deleted; so in my case it wasn't corrupt, but gone! Above is a log from when that happened.