What is the proper way to configure openbmc to get ipmitool fru print valid data? - openbmc

I am having problems generating the correct configuration to get ipmitool fru print to work.
What I see is:
ipmitool fru print
...
FRU Device Description : Caracal1_FRU (ID 112)
Device not present (Requested sensor, data, or record not found)
All my FRUs are listed. All have the same Device not present message.
At startup I see the following messages get displayed on the console.
Starting Read system/chassis/Caracal1_FRU EEPROM...
[ OK ] Finished Read system/chassis/Caracal1_FRU EEPROM.
I also see valid data in the output of these commands:
busctl introspect --no-pager xyz.openbmc_project.Inventory.Manager \
    /xyz/openbmc_project/inventory/system/chassis/Caracal1_FRU
and
busctl introspect --no-pager xyz.openbmc_project.FruDevice /xyz/openbmc_project/FruDevice/Caracal
I don't understand why the names differ between Inventory.Manager and FruDevice. It looks like the differences are mapped as follows:
Caracal1_FRU <=> Caracal
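To compare the two sides, I have been dumping both object trees with a small helper (a rough sketch; it just shells out to busctl using the service names shown above):

import subprocess

# rough sketch: print the object paths exported by both services so the
# Inventory.Manager and FruDevice names can be compared side by side
for service in ("xyz.openbmc_project.Inventory.Manager",
                "xyz.openbmc_project.FruDevice"):
    print("===", service)
    result = subprocess.run(["busctl", "tree", "--no-pager", "--list", service],
                            capture_output=True, text=True)
    print(result.stdout)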
I have validated that I have correct data in the yaml file under build/.
I would be happy to post the contents of meta-pavailion/ if needed.
Thanks in advance

Related

GRIB1 to GRIB2 eccodes conversion failing due to paramId = 0

So what I am trying to do is convert a GRIB1 file, containing mostly wind data, to GRIB2.
Usually you can just change the edition to 2 and the eccodes lib does everything else.
Now the issue is that my GRIB1 file has only paramId=0 and shortName=unknown messages.
What the hell am I supposed to do? When I load the file into a viewer (e.g. PredictWind Offshore) it displays just fine. Any ideas on how I can convert this to GRIB2 without knowing what the messages are? Am I missing something?
What I already tried:
❯ grib_set -s edition=2 e1.grib e2.grib
ECCODES ERROR : concept: no match for paramId=0
ECCODES ERROR : Please check the Parameter Database 'https://apps.ecmwf.int/codes/grib/param-db/?id=0'
ECCODES ERROR : concept: input handle edition=2
ECCODES ERROR : grib_set_values[0] edition (type=long) failed: Concept no match
What my GRIB1 (e1.grib) looks like:
❯ grib_dump e1.grib | egrep 'paramId|shortName'
...
shortName = unknown;
paramId = 0;
...
❯ grib_dump dsv1.grib | egrep 'paramId' | wc -l
679 # all unknown and 0
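One direction I have been considering but have not verified (a rough sketch with the eccodes Python bindings; the parameter numbers are only an example for 10 m u-wind on the ECMWF local table, yours will differ): tag every message with a real parameter so paramId no longer resolves to 0, then flip the edition.

from eccodes import (codes_grib_new_from_file, codes_set,
                     codes_write, codes_release)

# untested sketch: assign a known parameter to each GRIB1 message, then
# switch the edition to 2; 98/128/165 are example values (ECMWF, table 128,
# 10 m u-wind) and must be replaced with whatever the data actually is
with open("e1.grib", "rb") as fin, open("e2.grib", "wb") as fout:
    while True:
        gid = codes_grib_new_from_file(fin)
        if gid is None:
            break
        codes_set(gid, "centre", 98)
        codes_set(gid, "table2Version", 128)
        codes_set(gid, "indicatorOfParameter", 165)
        codes_set(gid, "edition", 2)
        codes_write(gid, fout)
        codes_release(gid)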
Edit:
For future readers:
I ended up not using eccodes at all. My solution now uses PyNIO, which worked great for me; it is able to read GRIB1.
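Roughly what my PyNIO reading looks like (a trimmed sketch; the variable names depend on your file, so print f.variables first):

import Nio  # PyNIO

# trimmed sketch: open the GRIB1 file and see what the "unknown" messages contain
f = Nio.open_file("e1.grib", "r")
print(f.variables.keys())        # list the decoded variables
for name, var in f.variables.items():
    print(name, var.dimensions, var.shape)
f.close()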
For further info contact me directly.

Where does dev_dbg write its log to?

In a device driver source file in the Linux tree, I saw dev_dbg(...) and dev_err(...); where do I find the logged messages?
One reference suggests adding #define DEBUG. The other reference involves dynamic debug and debugfs, and I got lost.
dev_dbg() expands to dynamic_dev_dbg(), to dev_printk(), or to a no-op, depending on the compilation flags:
#if defined(CONFIG_DYNAMIC_DEBUG)
#define dev_dbg(dev, format, ...) \
do { \
dynamic_dev_dbg(dev, format, ##__VA_ARGS__); \
} while (0)
#elif defined(DEBUG)
#define dev_dbg(dev, format, arg...) \
dev_printk(KERN_DEBUG, dev, format, ##arg)
#else
#define dev_dbg(dev, format, arg...) \
({ \
if (0) \
dev_printk(KERN_DEBUG, dev, format, ##arg); \
})
#endif
Both dynamic_dev_dbg() and dev_printk() call dev_printk_emit(), which in turn calls vprintk_emit().
This is the very same function that is called when you do a plain printk(). Note that the rest of the family, such as dev_err(), ends up in the same function as well.
So the buffer is the same in all cases: the kernel's internal ring buffer.
The logged message ends up in:
1. The current console, if the kernel loglevel (which can be changed via the kernel command line or via procfs) is high enough for the message in question, here KERN_DEBUG.
2. The internal buffer, which can be read by running the dmesg command.
Note that the data in 2. is kept only as long as there is room in the buffer; since the buffer is limited and circular, newer data overwrites the oldest.
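For example, you can check the current console loglevel from user space (a small sketch; KERN_DEBUG is level 7, so the console loglevel has to be 8 for such messages to reach the console):

# read the four printk values: console loglevel, default message loglevel,
# minimum console loglevel, default console loglevel
with open("/proc/sys/kernel/printk") as f:
    print(f.read())        # e.g. "4   4   1   7"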
Additional information on how to enable Dynamic Debug.
First of all, be sure you have CONFIG_DYNAMIC_DEBUG=y in the kernel configuration.
Assume we would like to enable all debug prints in the built-in module named 8250. To achieve that we simply add 8250.dyndbg=+p to the kernel command line.
If the same driver is compiled as a loadable module, we may either add options 8250 dyndbg to the modprobe configuration, or pass it on the command line when loading the module manually, like modprobe 8250 dyndbg.
More details are described in the Dynamic Debug documentation.
The "How certain debug prints are automatically enabled in linux kernel?" raises the question why some debug prints are automatically enabled and how DEBUG affects that when CONFIG_DYNAMIC_DEBUG=y. The answer is lying in the dynamic_debug.h and since it's used during compilation the _DPRINTK_FLAGS_DEFAULT defines the certain message appearence.
#if defined DEBUG
#define _DPRINTK_FLAGS_DEFAULT _DPRINTK_FLAGS_PRINT
#else
#define _DPRINTK_FLAGS_DEFAULT 0
#endif
You can find dev_err(...) messages in the kernel log. As the name implies, dev_err(...) messages are error messages, so they will definitely be printed if execution reaches that point. dev_dbg(...) messages are debug messages, which are used more generously in kernel driver code and are not printed by default. So everything you have read about dynamic debugging comes into play with dev_dbg(...).
There are several preconditions for dynamic debugging to work; 1. and 2. below are general preconditions, while 3. concerns your particular driver/module/subsystem (see the sketch after this list).
1. Dynamic debugging support has to be in your kernel config: CONFIG_DYNAMIC_DEBUG=y. You can check whether that is the case with zgrep DYNAMIC_DEBUG /proc/config.gz.
2. debugfs has to be mounted. You can check with sudo mount | grep debugfs and, if it is not there, mount it with sudo mount -t debugfs none /sys/kernel/debug.
3. Refer to the dynamic debugging documentation and enable the particular file/function/line you are interested in.
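For example, to enable all dev_dbg() call sites in one source file at run time, write a query to the control file (a minimal sketch; it needs root, and the file name is just an example):

# minimal sketch: turn on printing (+p) for every pr_debug/dev_dbg call site
# in the given source file; the file name here is only an example
with open("/sys/kernel/debug/dynamic_debug/control", "w") as ctl:
    ctl.write("file 8250_port.c +p\n")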

train.py error in ibm watson retrieve and rank service setup

I'm following the retrieve and rank tutorial and everything is good until the train.py script - I get the error "ValueError: No JSON object could be decoded".
my command line with masked creds:
python ./train.py -u "zzzz":"ssss" -i /Users/nik/Downloads/cranfield_gt.csv -c "zzzz" -x example_collection -n "example_ranker"
result:
Input file is /Users/nik/Downloads/cranfield_gt.csv
Solr cluster is zzzz
Solr collection is example_collection
Ranker name is example_ranker
Rows per query 10
Generating training data...
Command:
curl -k -s -u zzzz:ssss -d "q=what similarity laws must be obeyed when constructing aeroelastic models of heated high speed aircraft.&gt=184,3,29,3,31,3,12,2,51,2,102,2,13,1,14,1,15,1,57,3,378,3,859,3,185,2,30,2,37,2,52,1,142,1,195,1,875,3,56,2,66,2,95,2,462,1,497,2,858,2,876,2,879,2,880,2,486,0&generateHeader=true&rows=10&returnRSInput=true&wt=json" "https://gateway.watsonplatform.net/retrieve-and-rank/api/v1/solr_clusters/zzzz/solr/example_collection/fcselect"
Response:
Traceback (most recent call last):
File "./train.py", line 88, in <module>
parsed_json = json.loads(output)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 384, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
Any ideas why I get this error and, most of all, how to resolve it?
Thanks,
Nik
OK, this is one of those late-night experiences... I was convinced that I had uploaded my cranfield_data.json file, but checking it today showed me that I hadn't.
Running the script today to upload it again and seeing the confirmation was the key.
After that I repeated the train.py step and everything worked!
I hope this helps someone else too.
BTW, just before uploading the first time I had tried to update and recompile curl. It seems I did not configure it to use https, and I guess I did not pay attention when I ran the curl command to upload cranfield_data.json the first time.
Today I saw the error "protocol "https" not supported", and this helped me understand what happened before. Restoring the original curl on my Mac resolved the issue.
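For anyone who hits the same traceback for a different reason: it helps to see what the fcselect call actually returned before json.loads() fails on it. A small, hypothetical change around the failing line of train.py (the variable name output is taken from the traceback above):

import json

try:
    parsed_json = json.loads(output)
except ValueError:
    # an empty string, or an HTML/plain-text error page from the service,
    # is what usually triggers "No JSON object could be decoded"
    print("Non-JSON response from the service:")
    print(repr(output))
    raise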

wget files from FTP-like listings

So, a site that used to offer FTP now has an HTTP front-end and won't allow FTP connections. The site in question (for an example directory) shows a page with links to different dates. Inside each of these date directories there are many files, and I typically just need to get some file matching a clear pattern, e.g. *h17v04*.hdf. I thought this could work:
wget -I "${PLATFORM}/${PRODUCT}/${YEAR}.*" -r -l 4 \
--user-agent="Mozilla/5.0 (Windows NT 5.2; rv:2.0.1) Gecko/20100101 Firefox/4.0.1" \
--verbose -c -np -nc -nd \
-A "*h17v04*.hdf" http://e4ftl01.cr.usgs.gov/$PLATFORM/$PRODUCT/
where PLATFORM=MOLT, PRODUCT=MOD09GA.005 and YEAR=2004, for example. This seems to start looking into all the useful dates, finds the index.html, and then just skips to the next directory, without downloading the relevant hdf file:
--2013-06-14 13:09:18-- http://e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/
Reusing existing connection to e4ftl01.cr.usgs.gov:80.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/index.html'
[ <=> ] 174,182 134K/s in 1.3s
2013-06-14 13:09:20 (134 KB/s) - `e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/index.html' saved [174182]
Removing e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/index.html since it should be rejected.
--2013-06-14 13:09:20-- http://e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.02/
[...]
If I drop the -A option, only the index.html file is downloaded to my system, but it appears it's not parsed and the links are not followed. I don't really know what more is required to make this work, as I can't see why it doesn't!
SOLUTION
In the end, the problem was due to an old bug in the local version of wget. However, I ended up writing my own script for downloading MODIS data from the server above. The script is pure Python, and is available from here.
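A stripped-down version of the same idea (not the actual script; the URL layout, year, and tile pattern are the ones from the question, and the server may meanwhile require authentication):

import re
import urllib.request

# sketch: read the product index, walk the per-date pages, and fetch only
# files whose names match the tile pattern from the question
BASE = "http://e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/"
DATE_RE = re.compile(r'href="(2004\.\d\d\.\d\d/)"')
FILE_RE = re.compile(r'href="([^"]*h17v04[^"]*\.hdf)"')

index = urllib.request.urlopen(BASE).read().decode("utf-8", "ignore")
for date_dir in DATE_RE.findall(index):
    page = urllib.request.urlopen(BASE + date_dir).read().decode("utf-8", "ignore")
    for name in FILE_RE.findall(page):
        with urllib.request.urlopen(BASE + date_dir + name) as resp, \
             open(name, "wb") as out:
            out.write(resp.read())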
Consider using pyModis instead of wget; it is a free and open source Python library for working with MODIS data. It offers bulk download for user-selected time ranges, mosaicking of MODIS tiles, reprojection from Sinusoidal to other projections, and conversion of HDF to other formats. See
http://www.pymodis.org/

hadoop-streaming example failed to run - Type mismatch in key from map

I was running $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-D stream.map.output.field.separator=. \
-D stream.num.map.output.key.fields=4 \
-input myInputDirs \
-output myOutputDir \
-mapper org.apache.hadoop.mapred.lib.IdentityMapper \
-reducer org.apache.hadoop.mapred.lib.IdentityReducer
What should be the input file when IdentityMapper is the mapper?
I was hoping to see it sort on certain selected keys rather than on the entire key. My input file is simply
"aa bb".
"cc dd"
Not sure what I missed. I always get this error:
java.lang.Exception: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, recieved org.apache.hadoop.io.LongWritable
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:371)
Caused by: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, recieved org.apache.hadoop.io.LongWritable
This is a known bug and here is the JIRA. The bug was identified against Hadoop 0.21.0, but I don't think the fix is in any of the Hadoop release versions. If you are really interested in fixing this, you can:
download the source code for Hadoop (for the release you are working)
download the patch from JIRA and apply it
build and test Hadoop
Here are the instructions on how to apply a patch.
Or, instead of using IdentityMapper and IdentityReducer, use a Python/Perl script that reads the k/v pairs from STDIN and writes the same k/v pairs to STDOUT without any processing. It is like creating your own IdentityMapper and IdentityReducer without using Java; a sketch follows below.
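A minimal version (identity.py is a made-up name) that copies every tab-separated key/value line from STDIN to STDOUT unchanged:

#!/usr/bin/env python
# identity.py - pass-through mapper/reducer for Hadoop streaming:
# every input line (key<TAB>value) is written back out untouched
import sys

for line in sys.stdin:
    sys.stdout.write(line)

You would then pass it with -mapper identity.py -reducer identity.py -file identity.py instead of the Java classes, which also avoids the Text/LongWritable mismatch, since streaming scripts always exchange plain text.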
I was trying my hand at Hadoop with my own example and got the same error. I used KeyValueTextInputFormat to resolve the issue. You can have a look at the following blog for the same:
http://sanketraut.blogspot.in/2012/06/hadoop-example-setting-up-hadoop-on.html
Hope it helps you.
Peace.
Sanket Raut