Encode and mux GStreamer pipeline into MPEG-TS

I'm trying to encode and mux a GESPipeline into MPEG-TS for streaming over UDP.
The pipeline plays fine on screen in preview mode.
My attempt, essentially:
GstEncodingContainerProfile *prof;
GstCaps *caps;

caps = gst_caps_from_string("video/mpegts");
prof = gst_encoding_container_profile_new("test-app-profile", NULL, caps, NULL);

caps = gst_caps_from_string("video/x-h264");
gst_encoding_container_profile_add_profile(prof,
    (GstEncodingProfile *) gst_encoding_video_profile_new(caps, NULL, NULL, 0));

caps = gst_caps_from_string("audio/x-ac3");
gst_encoding_container_profile_add_profile(prof,
    (GstEncodingProfile *) gst_encoding_audio_profile_new(caps, NULL, NULL, 0));

// this fails:
ges_pipeline_set_render_settings(pl, "file:///path/out.ts", prof);
In the output with GST_DEBUG=3:
encodebin gstencodebin.c:1976:create_elements_and_pads: error: No available muxer for format video/mpegts
Update:
A more detailed debug log reveals that encodebin actually looks at mpegtsmux, but skips it. Why?
Relevant messages:
gst_encode_bin_setup_profile: Setting up profile 0x557c3c98c460:test-app-profile (type:container)
create_elements_and_pads: Current profile : test-app-profile
_get_muxer: Getting list of muxers for format video/mpegts
gst_element_factory_list_filter: finding factories
...
gst_element_factory_list_filter: Trying mpegtsmux
gst_structure_parse_field: trying field name 'systemstream'
_priv_gst_value_parse_value: trying type name 'boolean'
gst_structure_parse_field: trying field name 'packetsize'
_priv_gst_value_parse_value: trying type name 'int'
... tries other muxers ...
If I change video/mpegts to video/x-matroska, an MKV file is produced (though ugly and without sound).
How do I encode into MPEG-TS?

The problem was missing fields that are listed in the src caps shown by gst-inspect-1.0 mpegtsmux. Those fields are required: if you don't specify them, the caps won't match the muxer.
Solution for mpegtsmux:
gst_caps_from_string("video/mpegts, systemstream=true, packetsize=188");
Thanks to the Freenode #gstreamer IRC channel.
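Putting it together, a minimal sketch of the corrected profile setup (variable names prof, caps, and pl follow the question; the final ges_pipeline_set_mode call switches the pipeline from preview to render; error checking omitted):

GstEncodingContainerProfile *prof;
GstCaps *caps;

/* Container caps: systemstream and packetsize are what make mpegtsmux match. */
caps = gst_caps_from_string("video/mpegts, systemstream=true, packetsize=188");
prof = gst_encoding_container_profile_new("test-app-profile", NULL, caps, NULL);
gst_caps_unref(caps);  /* the profile keeps its own reference */

/* H.264 video stream */
caps = gst_caps_from_string("video/x-h264");
gst_encoding_container_profile_add_profile(prof,
    (GstEncodingProfile *) gst_encoding_video_profile_new(caps, NULL, NULL, 0));
gst_caps_unref(caps);

/* AC-3 audio stream */
caps = gst_caps_from_string("audio/x-ac3");
gst_encoding_container_profile_add_profile(prof,
    (GstEncodingProfile *) gst_encoding_audio_profile_new(caps, NULL, NULL, 0));
gst_caps_unref(caps);

ges_pipeline_set_render_settings(pl, "file:///path/out.ts", (GstEncodingProfile *) prof);
ges_pipeline_set_mode(pl, GES_PIPELINE_MODE_RENDER);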


Setting Default Value to 'None' for Parameter (Azure Data Factory)

I am currently trying to parametrize a dataset so that the compression type of a binary file can be set without creating a new dataset.
The issue I am having is that I cannot seem to make the compression type default to 'None' while still having a parameter. I have tried typing in null, '', etc., but nothing seems to work: the pipeline either will not run, or it returns an error such as "it cannot be null" or "invalid type ''". Any advice would be appreciated.
Update:
It appears this has to do with how the Binary dataset is designed. Going through the Binary dataset code, I can see a difference in how the compression block is handled: a parameter default value of None does indeed lead to an error, while picking the built-in "None" option from the compression-type dropdown of the Binary dataset works fine. You can reach out to support for an official response, or log an issue or share an idea.
I have tried this at both source and sink with CSV files (Source.csv and Source.csv.gz, both ways), and there you can set the parameter default value to None and it works just fine. The same is expected for the Binary format dataset properties, but in vain 😕
Not an obvious solution, but you can add a parameter named "CompressionType" to your dataset and then edit the dataset JSON to add this under "typeProperties":

"#if(equals(dataset().CompressionType,'None'),'no_compression','compression')": {
    "type": "#dataset().CompressionType"
}
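For context, a sketch of where that block sits in a parameterized Binary dataset's JSON (the dataset, linked service, container, and file names here are placeholders):

{
    "name": "BinaryDataset1",
    "properties": {
        "type": "Binary",
        "linkedServiceName": {
            "referenceName": "MyBlobStorage",
            "type": "LinkedServiceReference"
        },
        "parameters": {
            "CompressionType": {
                "type": "string",
                "defaultValue": "None"
            }
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "fileName": "Source.csv.gz",
                "container": "input"
            },
            "#if(equals(dataset().CompressionType,'None'),'no_compression','compression')": {
                "type": "#dataset().CompressionType"
            }
        }
    }
}

The trick is in the expression-valued key: when the parameter is 'None' it evaluates to 'no_compression', a dummy property name the service apparently ignores, and otherwise it becomes the real "compression" property, so no invalid empty compression block is ever emitted.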

Using pysnmp for Juniper OIDs (with octets)

The Juniper knowledge base says that you can hit jnxOperatingCPU.x.x.x.x to get the memory usage from the device, and the x.x.x.x are "the last 4 octets", in my case "9.1.0.0".
I don't seem to be able to get results like this using pysnmp's getCmd() method. I have the JUNIPER-MIB in place, but the script returns:
No symbol JUNIPER-MIB::jnxOperatingCPU.9.1.0.0 at <pysnmp.smi.builder.MibBuilder object at 0x198b810>
I have another SNMP monitoring tool in place that can reach this OID, so I know it's valid on this device. I can also use the full numeric OID to get the value, but I'd rather have the pretty name.
Might anyone have an example of using such an OID with pysnmp.hlapi?
From the error message it looks like you are using the ObjectIdentity class incorrectly (pasting your code snippet would have been helpful, though).
According to the JUNIPER-MIB the jnxOperatingCPU object belongs to the jnxOperatingTable table which has these indices:
jnxOperatingEntry OBJECT-TYPE
    SYNTAX      JnxOperatingEntry
    MAX-ACCESS  not-accessible
    STATUS      current
    DESCRIPTION
        "An entry of operating status table."
    INDEX   { jnxOperatingContentsIndex,
              jnxOperatingL1Index,
              jnxOperatingL2Index,
              jnxOperatingL3Index }
    ::= { jnxOperatingTable 1 }
All four indices are of type Integer32.
Therefore try this:
ObjectIdentity('JUNIPER-MIB', 'jnxOperatingCPU', 9, 1, 0, 0)
Here is the documentation on the ObjectIdentity class.
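For completeness, a minimal hlapi sketch (the host, port, and community string are placeholders; it assumes JUNIPER-MIB is already available to pysnmp, as in your setup):

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public'),                 # placeholder community
        UdpTransportTarget(('192.0.2.1', 161)),  # placeholder device address
        ContextData(),
        # Pass the four table indices as separate arguments,
        # not as part of the symbol name string:
        ObjectType(ObjectIdentity('JUNIPER-MIB', 'jnxOperatingCPU', 9, 1, 0, 0)),
    )
)

if errorIndication:
    print(errorIndication)
else:
    for varBind in varBinds:
        print(' = '.join([x.prettyPrint() for x in varBind]))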

How to make Windows' Bonjour resolve foo.bar.local subdomains created by Avahi

Why can't Windows' Bonjour (the Apple one) automatically resolve foo.bar.local, when Ubuntu and macOS can?
foo.local, on the other hand, is resolved without issues by every OS.
Here's my avahi-daemon.conf:
[server]
host-name=foo
domain-name=bar.local
...
This discussion mentions that Windows' Bonjour implementation does not support aliases; is this the culprit? And how does this tool differ from my solution?
EDIT: I don't want to set an alias. foo.bar.local is different from bar.local.
I just want to have different hostnames under the same "domain".
For example, foo.bar.local is 192.168.0.8 while foo1.bar.local is 192.168.0.9.
I won't have foo.local, bar.local, and foo.bar.local all on the same network. I will use foo.bar.local, with only foo varying (*.bar.local).
From my current findings, this seems to be intentional. Excerpt from the source code (mDNSResponder-878.30.4, function NSPLookupServiceBegin in mdnsNSP.c):
// <rdar://problem/4050633>
// Don't resolve multi-label name
// <rdar://problem/5914160> Eliminate use of GetNextLabel in mdnsNSP
// Add checks for GetNextLabel returning NULL, individual labels being greater than
// 64 bytes, and the number of labels being greater than MAX_LABELS
replyDomain = translated;
while (replyDomain && *replyDomain && labels < MAX_LABELS)
{
    label[labels++] = replyDomain;
    replyDomain = GetNextLabel(replyDomain, text);
}
require_action( labels == 2, exit, err = WSASERVICE_NOT_FOUND );
It returns an error if the name has more than two labels, as is the case for foo.bar.local.
In my tests, I simply removed that last line. With the new build, names with multiple labels resolved successfully, and I have not encountered any side effects so far.
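Concretely, the modified excerpt looks like this (a sketch; the only change is commenting out the two-label requirement):

replyDomain = translated;
while (replyDomain && *replyDomain && labels < MAX_LABELS)
{
    label[labels++] = replyDomain;
    replyDomain = GetNextLabel(replyDomain, text);
}
// require_action( labels == 2, exit, err = WSASERVICE_NOT_FOUND );  // removed: this is the check that rejects foo.bar.local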
Does anybody have an idea about the intention behind not resolving multi-label names?

How to determine string encoding in Cocoa?

Recently I've been working on a radio player. Sometimes the ID3 tag text is garbled.
Here is my code:
CFDictionaryRef audioInfoDictionary;
UInt32 size = sizeof(audioInfoDictionary);
OSStatus result = AudioFileGetProperty(fileID, kAudioFilePropertyInfoDictionary,
                                       &size, &audioInfoDictionary);
The ID3 info is in audioInfoDictionary. Sometimes the ID3 tag doesn't use UTF-8 encoding, and the title and artist name come out garbled.
Is there any way to determine what encoding a string uses?
Thanks!
As long as it's an NSString object, there's no specific encoding: an NSString is guaranteed to represent whatever was put into it, using the encoding determined when it was created. See the Working with Encodings section of the docs.
From where are you getting the ID3 tags? The moment you "receive" this information is the best time to determine its encoding. See Creating and Initializing Strings and the next few sections (for file and URL creation) for a list of initializers. Some of them let you set the encoding, and others pass back (by reference) the "best guess" encoding the system determined when creating the string. Look for methods with "usedEncoding:" parameters for the system's reported guess.
All of this really depends on exactly what is handing you that string. Are you reading it from a file (an MP3) or a web service (internet radio)? If the latter, the server's response should include the encoding, and if that's wrong, there's not much to do but guess.
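If you can get at the raw tag bytes as NSData (rather than the already-decoded CFStrings that kAudioFilePropertyInfoDictionary hands back), one common approach is to try encodings in order of strictness. A minimal sketch; the fallback list is an assumption, so adjust it to the encodings your streams actually use:

// Try strict encodings first; Latin-1 never rejects any byte sequence,
// so it must come last as the catch-all.
NSString *StringFromTagData(NSData *data) {
    const NSStringEncoding candidates[] = {
        NSUTF8StringEncoding,       // strict: fails on invalid UTF-8
        NSShiftJISStringEncoding,   // example legacy fallback
        NSISOLatin1StringEncoding,  // accepts anything; last resort
    };
    for (size_t i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
        // initWithData:encoding: returns nil when the bytes are not valid
        // in the given encoding.
        NSString *s = [[NSString alloc] initWithData:data encoding:candidates[i]];
        if (s != nil) {
            return s;
        }
    }
    return nil;
}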

Incorrect partition detected by PartitionScanner in custom TextEditor for eclipse

I have a PartitionScanner that extends RuleBasedPartitionScanner in my custom text editor plugin for Eclipse. I am having issues with the partition scanner detecting character sequences within larger strings, resulting in the document being partitioned incorrectly. For example, within the constructor of my partition scanner I have the following rule set up:
public MyPartitionScanner() {
    ...
    rules.add(new MultiLineRule("SET", "ENDSET", mytoken));
    ...
}
However, if I happen to use a token that contains the character sequence "SET", the partition scanner seems to continue searching for the end sequence ("ENDSET") and marks the rest of the document as a single partition of type "mytoken":
var myRESULTSET34 = ...
Is there a way to make the partition scanner ignore the "SET" inside the identifier above and only recognize "SET" as a whole word?
Thank you.
Using MultiLineRule as is, you won't be able to differentiate. But you can create your own subclass that overrides sequenceDetected and, when the super implementation returns true, does a look-back/look-ahead to make sure the sequence is preceded/followed by EOF or whitespace. If it isn't, push the characters back onto the scanner and return false.
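A sketch of that approach (the word-boundary test here, letter-or-digit, is one possible definition; it also assumes the scanner tolerates an unread at the very start of the document, as the stock RuleBasedPartitionScanner does):

import org.eclipse.jface.text.rules.ICharacterScanner;
import org.eclipse.jface.text.rules.IToken;
import org.eclipse.jface.text.rules.MultiLineRule;

// A MultiLineRule that matches its start/end sequences only as whole words,
// so the "SET" inside myRESULTSET34 no longer opens a partition.
public class WholeWordMultiLineRule extends MultiLineRule {

    public WholeWordMultiLineRule(String startSequence, String endSequence, IToken token) {
        super(startSequence, endSequence, token);
    }

    @Override
    protected boolean sequenceDetected(ICharacterScanner scanner, char[] sequence, boolean eofAllowed) {
        // On entry the first character of the sequence has already been consumed.
        if (precededByWordChar(scanner)) {
            return false; // e.g. the "SET" in "RESULTSET": reject without consuming more
        }
        if (!super.sequenceDetected(scanner, sequence, eofAllowed)) {
            return false; // super already pushed back what it consumed
        }
        // Look ahead one character: the sequence must end at a word boundary.
        int next = scanner.read();
        if (next != ICharacterScanner.EOF) {
            scanner.unread();
        }
        if (next == ICharacterScanner.EOF || !Character.isLetterOrDigit((char) next)) {
            return true;
        }
        // Embedded in a longer word (e.g. "SET34"): push back and fail.
        for (int i = sequence.length - 1; i > 0; i--) {
            scanner.unread();
        }
        return false;
    }

    // Peeks at the character before the sequence, restoring the scanner position.
    private boolean precededByWordChar(ICharacterScanner scanner) {
        scanner.unread();          // back to the first sequence character
        scanner.unread();          // back to the character before it
        int prev = scanner.read(); // EOF if we are at the start of the document
        scanner.read();            // re-consume the first sequence character
        return prev != ICharacterScanner.EOF && Character.isLetterOrDigit((char) prev);
    }
}

The rule is then registered exactly as before:

rules.add(new WholeWordMultiLineRule("SET", "ENDSET", mytoken));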