Perl: how to grab a specific string from a paragraph-style log

I'm new to Perl and still learning by working through example cases. I have a case that involves parsing a log with Perl.
Here is the data log:
Physical interface: ge-1/0/2, Unit: 101, Vlan-id: 101, Address: 10.187.132.3/27
  Index: 353, SNMP ifIndex: 577, VRRP-Traps: enabled
  Interface state: up, Group: 1, State: backup, VRRP Mode: Active
  Priority: 190, Advertisement interval: 1, Authentication type: none
  Advertisement threshold: 3, Delay threshold: 100, Computed send rate: 0
  Preempt: yes, Accept-data mode: yes, VIP count: 1, VIP: 10.187.132.1
  Dead timer: 2.715s, Master priority: 200, Master router: 10.187.132.2
  Virtual router uptime: 5w5d 12:54
  Tracking: disabled

Physical interface: ge-1/0/2, Unit: 102, Vlan-id: 102, Address: 10.187.132.35/27
  Index: 354, SNMP ifIndex: 580, VRRP-Traps: enabled
  Interface state: up, Group: 2, State: master, VRRP Mode: Active
  Priority: 200, Advertisement interval: 1, Authentication type: none
  Advertisement threshold: 3, Delay threshold: 100, Computed send rate: 0
  Preempt: yes, Accept-data mode: yes, VIP count: 1, VIP: 10.187.132.33
  Advertisement Timer: 0.816s, Master router: 10.187.132.35
  Virtual router uptime: 5w5d 12:54, Master router uptime: 5w5d 12:54
  Virtual Mac: 00:00:5e:00:01:02
  Tracking: disabled

Physical interface: ge-1/0/2, Unit: 103, Vlan-id: 103, Address: 10.187.132.67/27
  Index: 355, SNMP ifIndex: 581, VRRP-Traps: enabled
  Interface state: up, Group: 3, State: backup, VRRP Mode: Active
  Priority: 190, Advertisement interval: 1, Authentication type: none
  Advertisement threshold: 3, Delay threshold: 100, Computed send rate: 0
  Preempt: yes, Accept-data mode: yes, VIP count: 1, VIP: 10.187.132.65
  Dead timer: 2.624s, Master priority: 200, Master router: 10.187.132.66
  Virtual router uptime: 5w5d 12:54
  Tracking: disabled
I'm curious how we can retrieve certain values and store them in an array. I've tried grepping for them, but I'm confused about how to take a specific value.
Expected value, an array of hashes:
$VAR1 = {
          'interface' => 'ge-1/0/2.101',
          'address' => '10.187.132.3/27',
          'State' => 'backup',
          'Master-router' => '10.187.132.2'
        };
$VAR2 = {
          'interface' => 'ge-1/0/2.102',
          'address' => '10.187.132.35/27',
          'State' => 'master',
          'Master-router' => '10.187.132.35'
        };
$VAR3 = {
          'interface' => 'ge-1/0/2.103',
          'address' => '10.187.132.67/27',
          'State' => 'backup',
          'Master-router' => '10.187.132.66'
        };

You could use regex to split each paragraph up. Something like this might work:
/((\w|\s|-)+):\s([^,]+)/m
The matching groups would then look something like:
Match 1
1. Physical interface
2. e
3. ge-1/0/2
Match 2
1. Unit
2. t
3. 101
Match 3
1. Vlan-id
2. d
3. 101
As you can see, 1. corresponds to a key whereas 3. is the corresponding value (group 2 only holds the last repetition of the inner group, which is why it shows a single letter; it can be ignored). You can store the set of pairs any way you like.
For this to work, each attribute in the log would need to be comma-separated, which the example you have listed isn't. Assuming the example you have listed is correct, you would have to adjust the regex a little to make it work. You can test it online at rubular until it works. If it is comma-separated, you might just want to split each paragraph on "," and then split each result on ":".
EDIT:
It seems to me that each line is comma-separated, so the methods mentioned above might work perfectly well if you use them on a single line at a time.
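For example, here is a minimal sketch of that split-by-comma-then-colon approach, applied line by line (the filehandle, trimming step, and storage are illustrative assumptions, not from the original post):

my @pairs;
while (my $line = <STDIN>) {
    for my $item (split /,/, $line) {
        # split on the first colon only, so values like "12:54" stay intact
        my ($key, $value) = split /:/, $item, 2;
        next unless defined $value;
        s/^\s+|\s+$//g for $key, $value;   # trim surrounding whitespace
        push @pairs, [ $key, $value ];
    }
}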

To parse the data:
Obtain a section from the file by reading lines into the section while the lines start with whitespace.
Split the section at the item separators, which are either commas or line breaks.
Split each item at the first colon into a key and a value.
Case-fold the key and store the pair in a hash.
A sketch of an implementation:
use v5.16;   # fc() was added in Perl 5.16

my @hashes;
while (<>) {
    push @hashes, {} if /\A\S/;   # a line starting without whitespace begins a new section
    for my $item (split /,/) {
        my ($k, $v) = split /:/, $item, 2;
        $hashes[-1]{fc $k} = $v;
    }
}
Then you can extract the pieces of information you are interested in from the hashes.
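For instance, a sketch of one such extraction (note that the sketch above does not trim whitespace, so the case-folded keys may still carry leading spaces; the key names assumed here are mine):

for my $h (@hashes) {
    # keys were case-folded by fc, e.g. 'physical interface', 'state'
    printf "%s: %s (master router %s)\n",
        $h->{'physical interface'} // '',
        $h->{'state'}              // '',
        $h->{'master router'}      // '';
}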

Since each record is a paragraph, you can have Perl read the file in those chunks by setting local $/ = '' (paragraph mode). Then use a regex to capture each value that you want within that paragraph, pair that value with a hash key, and push a reference to that hash onto an array to form an array of hashes (AoH):
use strict;
use warnings;
use Data::Dumper;
my @arr;

local $/ = '';

while (<DATA>) {
    my %hash;

    ( $hash{'interface'} )     = /interface:\s+([^,]+)/;
    ( $hash{'address'} )       = /Address:\s+(\S+)/;
    ( $hash{'State'} )         = /State:\s+([^,]+)/;
    ( $hash{'Master-router'} ) = /Master router:\s+(\S+)/;

    push @arr, \%hash;
}

print Dumper \@arr;
__DATA__
Physical interface: ge-1/0/2, Unit: 101, Vlan-id: 101, Address: 10.187.132.3/27
  Index: 353, SNMP ifIndex: 577, VRRP-Traps: enabled
  Interface state: up, Group: 1, State: backup, VRRP Mode: Active
  Priority: 190, Advertisement interval: 1, Authentication type: none
  Advertisement threshold: 3, Delay threshold: 100, Computed send rate: 0
  Preempt: yes, Accept-data mode: yes, VIP count: 1, VIP: 10.187.132.1
  Dead timer: 2.715s, Master priority: 200, Master router: 10.187.132.2
  Virtual router uptime: 5w5d 12:54
  Tracking: disabled

Physical interface: ge-1/0/2, Unit: 102, Vlan-id: 102, Address: 10.187.132.35/27
  Index: 354, SNMP ifIndex: 580, VRRP-Traps: enabled
  Interface state: up, Group: 2, State: master, VRRP Mode: Active
  Priority: 200, Advertisement interval: 1, Authentication type: none
  Advertisement threshold: 3, Delay threshold: 100, Computed send rate: 0
  Preempt: yes, Accept-data mode: yes, VIP count: 1, VIP: 10.187.132.33
  Advertisement Timer: 0.816s, Master router: 10.187.132.35
  Virtual router uptime: 5w5d 12:54, Master router uptime: 5w5d 12:54
  Virtual Mac: 00:00:5e:00:01:02
  Tracking: disabled

Physical interface: ge-1/0/2, Unit: 103, Vlan-id: 103, Address: 10.187.132.67/27
  Index: 355, SNMP ifIndex: 581, VRRP-Traps: enabled
  Interface state: up, Group: 3, State: backup, VRRP Mode: Active
  Priority: 190, Advertisement interval: 1, Authentication type: none
  Advertisement threshold: 3, Delay threshold: 100, Computed send rate: 0
  Preempt: yes, Accept-data mode: yes, VIP count: 1, VIP: 10.187.132.65
  Dead timer: 2.624s, Master priority: 200, Master router: 10.187.132.66
  Virtual router uptime: 5w5d 12:54
  Tracking: disabled
Output:
$VAR1 = [
          {
            'Master-router' => '10.187.132.2',
            'interface' => 'ge-1/0/2',
            'address' => '10.187.132.3/27',
            'State' => 'backup'
          },
          {
            'Master-router' => '10.187.132.35',
            'interface' => 'ge-1/0/2',
            'address' => '10.187.132.35/27',
            'State' => 'master'
          },
          {
            'Master-router' => '10.187.132.66',
            'interface' => 'ge-1/0/2',
            'address' => '10.187.132.67/27',
            'State' => 'backup'
          }
        ];
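Note that the expected output in the question appends the unit to the interface name (e.g. 'ge-1/0/2.101'), while the regex above captures only 'ge-1/0/2'. One possible tweak, assuming the Unit field always follows the interface name on the same line, is to capture both and join them:

if ( /interface:\s+([^,]+),\s+Unit:\s+(\d+)/ ) {
    $hash{'interface'} = "$1.$2";   # e.g. 'ge-1/0/2.101'
}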
Hope this helps!

Related

How to generate outputs from PyTransitions FSM?

I am using PyTransitions to generate a simple FSM, configuring it using a yml file.
An example could be something like this:
initial: A
states:
  - A
  - B
  - C
transitions:
  - {trigger: "AtoC", source: "A", dest: "C"}
  - {trigger: "CtoB", source: "C", dest: "B"}
My question is: using the yml file, how do I write in state outputs? For example, at state A turn on LED 1, at state B turn on LED 2, at state C turn on LEDs 1 and 2. I can't find any documentation for this on the PyTransitions page.
My question is, using the yml file, how do I write in state outputs?
There is no dedicated output slot for states, but you can use transitions to trigger certain actions when a state is entered or left.
Actions can be done in callbacks, and those callbacks need to be implemented in the model. If you want to do most things via configuration, you can customize Machine and the State class it uses. For instance, you could create a custom LEDState and assign it a value array called led_states where each value represents an LED state. The array is applied to the LEDs in the model's after_state_change callback, which is called update_leds:
from transitions import Machine, State

class LEDState(State):
    ALL_OFF = [False] * 2  # number of LEDs

    def __init__(self, name, on_enter=None, on_exit=None,
                 ignore_invalid_triggers=None, led_states=None):
        # call the base class constructor without 'led_states'...
        super().__init__(name, on_enter, on_exit, ignore_invalid_triggers)
        # ... and assign its value to a custom property;
        # when 'led_states' is not passed, we assign the default
        # setting 'ALL_OFF'
        self.led_states = led_states if led_states is not None else self.ALL_OFF

# create a custom machine that uses 'LEDState'.
# 'LEDMachine' will pass all parameters in the configuration
# dictionary to the constructor of 'LEDState'.
class LEDMachine(Machine):
    state_cls = LEDState

class LEDModel:
    def __init__(self, config):
        self.machine = LEDMachine(model=self, **config, after_state_change='update_leds')

    def update_leds(self):
        print(f"---New State {self.state}---")
        for idx, led in enumerate(self.machine.get_state(self.state).led_states):
            print(f"Set LED {idx} {'ON' if led else 'OFF'}.")

# using a dictionary here instead of YAML
# but the process is the same
config = {
    'name': 'LEDModel',
    'initial': 'Off',
    'states': [
        {'name': 'Off'},
        {'name': 'A', 'led_states': [True, False]},
        {'name': 'B', 'led_states': [False, True]},
        {'name': 'C', 'led_states': [True, True]}
    ]
}

model = LEDModel(config)
model.to_A()
# ---New State A---
# Set LED 0 ON.
# Set LED 1 OFF.
model.to_C()
# ---New State C---
# Set LED 0 ON.
# Set LED 1 ON.
This is just one way it could be done. You could also just pass an array with the indexes of all LEDs that should be ON, and switch off all LEDs when exiting a state with before_state_change:
ALL_OFF = []

# ...
self.machine = LEDMachine(model=self, **config,
                          before_state_change='reset_leds',
                          after_state_change='set_leds')

# ...
def reset_leds(self):
    for led_idx in range(num_leds):
        print(f"Set {led_idx} 'OFF'.")

def set_leds(self):
    for led_idx in self.machine.get_state(self.state).led_states:
        print(f"Set LED {led_idx} 'ON'.")

# ...
{'name': 'A', 'led_states': [0]},
{'name': 'B', 'led_states': [1]},
{'name': 'C', 'led_states': [0, 1]}

no process: the process is not alive, for connecting to mongo

I'm using the mongox fork on my Elixir server.
It used to work well until today, when I started getting the error below:
GenServer #PID<0.23055.0> terminating
** (stop) exited in: GenServer.call(Mongo.PBKDF2Cache, {"a9f40827e764c2e9d74318e934596194", <<88, 76, 231, 25, 765, 153, 136, 68, 54, 109, 131, 126, 543, 654, 201, 250>>, 10000}, 5000)
** (EXIT) no process: the process is not alive or there's no process currently associated with the given name, possibly because its application isn't started
(elixir) lib/gen_server.ex:766: GenServer.call/3
(mongox) lib/mongo/connection/auth/scram.ex:66: Mongo.Connection.Auth.SCRAM.second_message/5
(mongox) lib/mongo/connection/auth/scram.ex:25: Mongo.Connection.Auth.SCRAM.conversation_first/6
(mongox) lib/mongo/connection/auth.ex:29: anonymous fn/3 in Mongo.Connection.Auth.run/1
(elixir) lib/enum.ex:2914: Enum.find_value_list/3
(mongox) lib/mongo/connection/auth.ex:28: Mongo.Connection.Auth.run/1
(mongox) lib/mongo/connection.ex:206: Mongo.Connection.connect/2
(connection) lib/connection.ex:622: Connection.enter_connect/5
Last message: nil
State: %{auth: [{"my-user", "mypassword"}], database: "my-db", opts: [backoff: 1000, hosts: [{"xxxxxx.mlab.com", "17060"}], port: 17060, hostname: 'xxxxxx.mlab.com', name: MongoPool, max_overflow: 500], queue: %{}, request_id: 0, socket: nil, tail: nil, timeout: 5000, wire_version: nil, write_concern: [w: 1]}
After digging into the code, I figured out that it fails on this line (in the deps):
https://github.com/work-capital/mongox/blob/feature/nan_type_support/lib/mongo/connection/auth/scram.ex#L66
when trying to call pbkdf2, which makes a GenServer call:
def pbkdf2(password, salt, iterations) do
  GenServer.call(@name, {password, salt, iterations})
end
Is this an error with connecting to the Mongo instance (which is on mlab), or is it an issue with my code?
Here are my configs:
config.exs
config :mongo_app, MongoApp,
  host: "xxxx.mlab.com",
  database: "my-db",
  username: "my-user",
  password: "mypass**",
  port: "17060",
  pool_size: "100",
  max_overflow: "500"
mix.exs:
def application do
  [extra_applications: [:logger, :poolboy],
   mod: {MongoApp.Application, []}]
end

defp deps do
  [{:mongox, git: "https://github.com/work-capital/mongox.git", branch: "feature/nan_type_support"},
   {:poolboy, "~> 1.5"}
  ]
end
application.ex:
defmodule MongoApp.Application do
  use Application

  def start(_type, _args) do
    MongoApp.connect

    import Supervisor.Spec, warn: false

    children = [
    ]

    opts = [strategy: :one_for_one, name: MongoApp.Supervisor]
    Supervisor.start_link(children, opts)
  end
end

Data lost in MongoDB replica set mode

My replica set has two nodes:
1: the master node
2: a slave node with priority:0, votes:0
The oplog size is 5000MB.
Run this for loop in the master's shell:
for (i = 0; i < 1000000; i++) {
    db.getSiblingDB("ff").c.insert(
        { a: i,
          d: i + ".#234" + (++i) + ".234546" + (++i) + ".568679" + (++i) + "31234." + (++i) + ".12342354" + (++i) + "5346457." + (++i) + "33543465456." + (++i) + ".6346456" + (++i) + "123235434." + (++i) + ".2345345345" + (++i)
        }
    )
}
Kill the slave node while the for loop is running: kill -9 $(pidof slave_node)
Stop the for loop after a second; then restart the slave node.
Then run db.getSiblingDB("ff").c.count() to check the data on both the slave and master nodes, with these results:
master: 20w (200,000 documents; "w" here stands for 10,000)
slave: 15w (150,000 documents)
The slave node can catch up with the primary, but a lot of data has been lost from the slave.
Why is this?
Here is the slave node's log as it restarts after being killed:
2017-11-27T05:53:53.873+0000 I NETWORK [thread1] waiting for connections on port 28006
2017-11-27T05:53:53.876+0000 I REPL [replExecDBWorker-0] New replica set config in use: { _id: "cpconfig2", version: 2, protocolVersion: 1, members: [ { _id: 0, host: "127.0.0.1:28007", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 3.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "127.0.0.1:28006", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 0 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: 60000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5a1ba5bbb0a652502a5f002a') } }
2017-11-27T05:53:53.876+0000 I REPL [replExecDBWorker-0] This node is 127.0.0.1:28006 in the config
2017-11-27T05:53:53.876+0000 I REPL [replExecDBWorker-0] transition to STARTUP2
2017-11-27T05:53:53.876+0000 I REPL [replExecDBWorker-0] Starting replication storage threads
2017-11-27T05:53:53.877+0000 I REPL [replExecDBWorker-0] Starting replication fetcher thread
2017-11-27T05:53:53.877+0000 I REPL [replExecDBWorker-0] Starting replication applier thread
2017-11-27T05:53:53.877+0000 I REPL [replExecDBWorker-0] Starting replication reporter thread
2017-11-27T05:53:53.877+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to 127.0.0.1:28007
2017-11-27T05:53:53.877+0000 I REPL [rsSync] transition to RECOVERING
2017-11-27T05:53:53.878+0000 I REPL [rsSync] transition to SECONDARY
2017-11-27T05:53:53.879+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to 127.0.0.1:28007, took 2ms (1 connections now open to 127.0.0.1:28007)
2017-11-27T05:53:53.879+0000 I REPL [ReplicationExecutor] Member 127.0.0.1:28007 is now in state PRIMARY
2017-11-27T05:53:54.011+0000 I FTDC [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OK
2017-11-27T05:53:54.645+0000 I NETWORK [thread1] connection accepted from 127.0.0.1:52404 #1 (1 connection now open)
2017-11-27T05:53:54.645+0000 I NETWORK [conn1] received client metadata from 127.0.0.1:52404 conn1: { driver: { name: "NetworkInterfaceASIO-Replication", version: "3.4.9" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 8 (jessie)"", architecture: "x86_64", version: "Kernel 3.10.0" } }
2017-11-27T05:53:59.878+0000 I REPL [rsBackgroundSync] sync source candidate: 127.0.0.1:28007
See the page Accuracy after Unexpected Shutdown for details and information on how to recover from this situation.

GStreamer no element error (lamemp3enc)

I'm trying to use GStreamer on an ODROID-C1+.
I installed gstreamer, base, good, ugly, bad, and libav, downloaded from here:
https://gstreamer.freedesktop.org/modules/
following the instructions here:
http://linuxfromscratch.org/blfs/view/svn/index.html
But when I run a pipeline like this:
gst-launch-1.0 -e pulsesrc device="alsa_input.usb-046d_0809_52A63768-02.analog-mono" ! audioconvert ! lamemp3enc target=1 bitrate=64 cbr=true ! filesink location=audio.mp3
I get this error:
WARNING: erroneous pipeline: no element "lamemp3enc"
What should I do?
EDIT:
I ran
GST_PLUGIN_PATH=/usr/lib/gstreamer-1.0/ gst-inspect-1.0 lamemp3enc
and got:
Factory Details:
Rank primary (256)
Long-name L.A.M.E. mp3 encoder
Klass Codec/Encoder/Audio
Description High-quality free MP3 encoder
Author Sebastian Dröge <sebastian.droege#collabora.co.uk>
Plugin Details:
Name lame
Description Encode MP3s with LAME
Filename /usr/lib/gstreamer-1.0/libgstlame.so
Version 1.8.1
License LGPL
Source module gst-plugins-ugly
Source release date 2016-04-20
Binary package GStreamer Ugly Plugins 1.8.1 BLFS
Origin URL http://www.linuxfromscratch.org/blfs/view/svn/
GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstAudioEncoder
                         +----GstLameMP3Enc
Implemented Interfaces:
GstPreset
Pad Templates:
SINK template: 'sink'
Availability: Always
Capabilities:
audio/x-raw
format: S16LE
layout: interleaved
rate: { 8000, 11025, 12000, 16000, 22050, 24000, 32000, 44100, 48000 }
channels: 1
audio/x-raw
format: S16LE
layout: interleaved
rate: { 8000, 11025, 12000, 16000, 22050, 24000, 32000, 44100, 48000 }
channels: 2
channel-mask: 0x0000000000000003
SRC template: 'src'
Availability: Always
Capabilities:
audio/mpeg
mpegversion: 1
layer: 3
rate: { 8000, 11025, 12000, 16000, 22050, 24000, 32000, 44100, 48000 }
channels: [ 1, 2 ]
Element Flags:
no flags set
Element Implementation:
Has change_state() function: gst_audio_encoder_change_state
Element has no clocking capabilities.
Element has no URI handling capabilities.
Pads:
SINK: 'sink'
Pad Template: 'sink'
SRC: 'src'
Pad Template: 'src'
Element Properties:
name : The name of the object
flags: readable, writable
String. Default: "lamemp3enc0"
parent : The parent of the object
flags: readable, writable
Object of type "GstObject"
perfect-timestamp : Favour perfect timestamps over tracking upstream timestamps
flags: readable, writable
Boolean. Default: false
mark-granule : Apply granule semantics to buffer metadata (implies perfect-timestamp)
flags: readable
Boolean. Default: false
hard-resync : Perform clipping and sample flushing upon discontinuity
flags: readable, writable
Boolean. Default: false
tolerance : Consider discontinuity if timestamp jitter/imperfection exceeds tolerance (ns)
flags: readable, writable
Integer64. Range: 0 - 9223372036854775807 Default: 40000000
target : Optimize for quality or bitrate
flags: readable, writable
Enum "GstLameMP3EncTarget" Default: 0, "quality"
(0): quality - Quality
(1): bitrate - Bitrate
bitrate : Bitrate in kbit/sec (Only valid if target is bitrate, for CBR one of 8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256 or 320)
flags: readable, writable
Integer. Range: 8 - 320 Default: 128
cbr : Enforce constant bitrate encoding (Only valid if target is bitrate)
flags: readable, writable
Boolean. Default: false
quality : VBR Quality from 0 to 10, 0 being the best (Only valid if target is quality)
flags: readable, writable
Float. Range: 0 - 9.999 Default: 4
encoding-engine-quality: Quality/speed of the encoding engine, this does not affect the bitrate!
flags: readable, writable
Enum "GstLameMP3EncEncodingEngineQuality" Default: 1, "standard"
(0): fast - Fast
(1): standard - Standard
(2): high - High
mono : Enforce mono encoding
flags: readable, writable
Boolean. Default: false
Presets:
"Ubuntu"
What OS are you running on the ODROID (Android/Ubuntu)? What does gst-inspect-1.0 lamemp3enc give? There is the library path .. you can ldd it:
ldd /usr/local/lib/gstreamer-1.0/libgstlame.so
linux-vdso.so.1 => (0x00007ffc7dbed000)
libgstaudio-1.0.so.0 => /usr/local/lib/libgstaudio-1.0.so.0 (0x00007f4a97faa000)
libgstbase-1.0.so.0 => /usr/local/lib/libgstbase-1.0.so.0 (0x00007f4a97d4c000)
libgstreamer-1.0.so.0 => /usr/local/lib/libgstreamer-1.0.so.0 (0x00007f4a97a31000)
libgobject-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0 (0x00007f4a977e0000)
libglib-2.0.so.0 => /lib/x86_64-linux-gnu/libglib-2.0.so.0 (0x00007f4a974d8000)
libmp3lame.so.0 => /usr/lib/x86_64-linux-gnu/libmp3lame.so.0 (0x00007f4a9724a000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f4a9702c000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4a96c67000)
libgsttag-1.0.so.0 => /usr/local/lib/libgsttag-1.0.so.0 (0x00007f4a96a2d000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4a96727000)
liborc-0.4.so.0 => /usr/local/lib/liborc-0.4.so.0 (0x00007f4a964a4000)
libgmodule-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgmodule-2.0.so.0 (0x00007f4a9629f000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f4a96097000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f4a95e93000)
libffi.so.6 => /usr/lib/x86_64-linux-gnu/libffi.so.6 (0x00007f4a95c8a000)
libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f4a95a4c000)
/lib64/ld-linux-x86-64.so.2 (0x0000558c95a32000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f4a95833000)
Check if you have all the libs .. you especially need libmp3lame.so ..
You can always use the apt-file tool to check which package provides this lib.. but it's quite apparent that it's this package (run as root or with sudo):
apt-get install libmp3lame-dev
If you did not have it at compile time (I guess you compiled from source, as you linked linuxfromscratch..) you need to recompile gst-plugins-ugly after installing this lib..
Then check the last part when you rerun configure for ugly (I would suggest using autogen.sh instead.. but I don't know your environment..), which should mention whether mp3 support will be built..
UPDATE
So you just need to set the env variable GST_PLUGIN_PATH. GStreamer just didn't know where to look for the mp3 plugin..
You can export the env variable, say in ~/.bashrc:
export GST_PLUGIN_PATH=/usr/lib/gstreamer-1.0/
and then the same pipe will work; here is a simpler one (tested, works):
gst-launch-1.0 -e audiotestsrc ! audioconvert ! lamemp3enc ! filesink location=audio.mp3
HTH

can't accept new chunks because there are still 1 deletes from previous migration

I have a MongoDB production cluster running 2.6.11 with 20 replica sets. I'm getting a disk space issue because the majority of the chunks are stored on one replica set. When I checked the log, I saw that moveChunk failed because of "deletes from previous migration":
2015-12-28T17:13:32.164+0000 [conn6504] about to log metadata event: { _id: "db1-2015-12-28T17:13:32-56816dbc6b0464b0a5801db8", server: "db1", clientAddr: "xx.xx.xx.11:50077", time: new Date(1451322812164), what: "moveChunk.start", ns: "emailing_nQafExtB.reports", details: { min: { email: "xxxxxxx" }, max: { email: "xxxxxxx" }, from: "shard16", to: "shard22" } }
2015-12-28T17:13:32.675+0000 [conn6504] about to log metadata event: { _id: "db1-2015-12-28T17:13:32-56816dbc6b0464b0a5801db9", server: "db1", clientAddr: "xx.xx.xx.11:50077", time: new Date(1451322812675), what: "moveChunk.from", ns: "emailing_nQafExtB.reports", details: { min: { email: "xxxxxxx" }, max: { email: "xxxxxxx" }, step 1 of 6: 3, step 2 of 6: 314, note: "aborted", errmsg: "moveChunk failed to engage TO-shard in the data transfer: can't accept new chunks because there are still 1 deletes from previous migration" } }
I followed the answer from this question, but it doesn't work for me. I ran the stepDown command on one primary, and then on all the primaries in my cluster. I did the same with the cleanUpOrphaned command.
Has anybody run into this problem?
Thanks in advance for any insights.