Raspbian enable SPI module

Sorry if this is not the right forum for this question, but I can't find the answer anywhere. I'm working on a Raspberry Pi project
which requires the SPI module to be loaded, and I can't get it to load.
Here's what I've done:
sudo apt-get update
sudo apt-get upgrade
sudo rpi-update
Here's what my blacklist.conf file looks like:
#blacklist spi and i2c by default (many users don't need them)
#blacklist spi-bcm2708
blacklist i2c-bcm2708
I've rebooted several times with no luck. When I run sudo uname -a I get:
Linux raspberrypi 3.18.5+ #744 PREEMPT Fri Jan 30 18:19:07 GMT 2015 armv6l GNU/Linux

See http://www.raspberrypi.org/forums/viewtopic.php?f=28&t=97314
That thread fixed my I2C and one-wire interfaces.
This is required with the new kernel upgrade to 3.18.5 on Jan 21st.
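If SPI still does not come up after following that thread, note that on the 3.18+ device-tree kernels the interfaces are switched on in /boot/config.txt rather than through the blacklist. A sketch of the relevant lines (raspi-config can make the same change from its menus):
dtparam=spi=on
dtparam=i2c_arm=on
Reboot after editing the file.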

You should have it enabled already, but you did not specify how you tested whether it works. What I suggest:
Check whether it is enabled using lsmod | grep spi_ or ls -al /dev/spi*
If it does not work in your program, try sudo adduser pi spi (if you use the pi user)
a) Download http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/plain/Documentation/spi/spidev_test.c
b) Compile it with gcc spidev_test.c -o spidev_test. If you get a compilation error, try downloading and compiling this older version of the file instead: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/plain/Documentation/spi/spidev_test.c?id=95b1ed2ac7ffe3205afc6f5a20320fbdb984da92
c) Short your MOSI and MISO pins together on your Raspberry Pi (pins 9 and 10 in this diagram: http://neophob.com/wp-content/uploads/2012/08/254px-GPIOs.png, but please double-check which pins to short on the schematic for your board revision)
d) Run the compiled program: sudo ./spidev_test -D /dev/spidev0.0
e) If it returns:
FF FF FF FF FF FF
40 00 00 00 00 95
FF FF FF FF FF FF
FF FF FF FF FF FF
FF FF FF FF FF FF
DE AD BE EF BA AD
F0 0D
it works, and the issue is more likely in your program or in the wiring to the other device.
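If you would rather drive the test from your own code, below is a minimal loopback sketch using the spidev ioctl interface. It assumes the device node /dev/spidev0.0 and that MOSI is shorted to MISO as in step c); with the jumper in place, the bytes read back should match the bytes sent. This is a sketch, not a drop-in replacement for spidev_test:

/* spi_loop.c - minimal spidev loopback check (assumes /dev/spidev0.0) */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/spi/spidev.h>

int main(void)
{
    int fd = open("/dev/spidev0.0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    uint8_t tx[4] = { 0xDE, 0xAD, 0xBE, 0xEF };
    uint8_t rx[4] = { 0 };

    struct spi_ioc_transfer tr;
    memset(&tr, 0, sizeof tr);
    tr.tx_buf = (unsigned long)tx;      /* buffers are passed by address */
    tr.rx_buf = (unsigned long)rx;
    tr.len = sizeof tx;
    tr.speed_hz = 500000;               /* 500 kHz is a safe test speed */
    tr.bits_per_word = 8;

    if (ioctl(fd, SPI_IOC_MESSAGE(1), &tr) < 0) { perror("ioctl"); return 1; }

    /* With MOSI shorted to MISO, rx should echo tx (DE AD BE EF). */
    for (int i = 0; i < 4; i++)
        printf("%02X ", rx[i]);
    printf("\n");

    close(fd);
    return 0;
}

Compile with gcc spi_loop.c -o spi_loop and run it with sudo, the same way as spidev_test.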

making bare metal simulation work (%rip sets to zero when starting to run)

Thanks for your previous help in How to make "%bp.hap.run-until name = X86_HLT_Instr" work?
My next obstacle is that %rip magically turns to zero when I start running.
My test program:
#include <simics/magic-instruction.h>

__attribute__((noinline))
void MagicBreakpoint() {
    MAGIC_BREAKPOINT;               // expands to the magic cpuid instruction
    asm volatile ("hlt");
}

extern "C" void _start() {
    asm volatile ("mov $42, %rax"); // the value we expect to see in %rax
    MagicBreakpoint();
}
The disassembled binary:
0000000000401000 <_Z15MagicBreakpointv>:
401000: 53 push %rbx
401001: b8 11 47 00 00 mov $0x4711,%eax
401006: 0f a2 cpuid
401008: f4 hlt
401009: 5b pop %rbx
40100a: c3 retq
40100b: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
0000000000401010 <_start>:
401010: 48 c7 c0 2a 00 00 00 mov $0x2a,%rax
401017: e9 e4 ff ff ff jmpq 401000 <_Z15MagicBreakpointv>
What I want to see is the execution starting from _start, setting %rax to 42, then hitting the magic instruction, then exiting.
Instead, the execution starts from %rip=0.
My script:
run-command-file "%simics%/targets/qsp-x86/firststeps-no-network.simics"
$start = ($system.mb.cpu0.core[0][0].load-binary ./small)
$system.mb.cpu0.core[0][0].set-pc $start ## Special command for the PC
$system.mb.cpu0.core[0][0].write-reg "rsp" 0x7fffffffdf50
enable-magic-breakpoint
print -x %rip
print -x %rsp
step-instruction
print -x %rip
quit
./simics -no-gui t2.simics
Intel Simics 6 (build 6096 linux64) Copyright 2010-2021 Intel Corporation
Use of this software is subject to appropriate license.
Type 'copyright' for details on copyright and 'help' for on-line documentation.
[board.mb.cpu0.core[0][0] info] VMP disabled. Failed to open device.
WARNING: Simics failed to enable VMP. Enabling VMP substantially improves
simulation performance. The problem is most likely caused by the
vmxmon kernel module not being properly installed or updated.
See the "Simics User's Guide", the "Performance" section,
for instructions how to setup VMP.
Welcome to Simics!
An x86 target machine, referred to as a Quick Start Platform (QSP)
in the documentation, has been just created.
To start the simulation, enter the command "run" (or simply "r") at
the Simics prompt. This will boot Linux and automatically log you in.
You will see the login appear in the serial console window.
Note that during the boot Linux will emit a couple
of harmless warning messages related to ACPI errors.
To pause the simulation, use the command "stop". To resume simulation,
enter the command "run" again.
0x401010
0x7fffffffdf50
[board.mb.cpu0.core[0][0]] Exception: General_Protection_Exception
0x0
As you can see, before executing step-instruction, %rip is 0x401010, and right after step-instruction, %rip is zero.
Your problem is that the x86 CPU starts in 16-bit legacy mode, but your code is 64-bit code.
With this target you could try running until the CPU reaches 64-bit mode, before loading and executing the binary:
simics> bp.hap.run-until name = Core_Mode_Switch index = 5
When the simulation stops you should be in 64-bit mode (that is what index 5 specifies). You can check the current execution mode by running the pregs command.
At that point, running your code starting with "$start =" should work.
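Putting it together, a sketch of the amended command file (the same commands as the original script, with the mode-switch wait added before loading; whether you then run or repeat the step-instruction/print sequence is up to you):
run-command-file "%simics%/targets/qsp-x86/firststeps-no-network.simics"
bp.hap.run-until name = Core_Mode_Switch index = 5
pregs                      # confirm the core is now in 64-bit mode
$start = ($system.mb.cpu0.core[0][0].load-binary ./small)
$system.mb.cpu0.core[0][0].set-pc $start
$system.mb.cpu0.core[0][0].write-reg "rsp" 0x7fffffffdf50
enable-magic-breakpoint
run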

How are 64-bit Unicode characters encoded?

I'm trying to find a way to learn the encoding for 64-bit characters (mostly Chinese) that I encounter. For example, the encoding for '好' ("hǎo", good) is 597d. But entering:
echo 好|od -t x1
in Linux Mint gives a result of:
0000000 e5 a5 bd 0a
0000004
What is the rule for translating "e5 a5 bd 0a" to "597d"?
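(The trailing 0a is just the newline that echo appends.) The bytes e5 a5 bd are the UTF-8 encoding of the code point U+597D: a three-byte UTF-8 sequence has the bit layout 1110xxxx 10xxxxxx 10xxxxxx, and concatenating the x bits gives the code point. A minimal sketch of the decoding in C:

/* utf8_decode3.c - decode one three-byte UTF-8 sequence */
#include <stdio.h>

int main(void)
{
    unsigned char b[3] = { 0xE5, 0xA5, 0xBD };      /* UTF-8 for 好 */

    /* 1110xxxx 10xxxxxx 10xxxxxx -> 16 payload bits */
    unsigned cp = ((b[0] & 0x0Fu) << 12)   /* 4 bits from the lead byte  */
                | ((b[1] & 0x3Fu) << 6)    /* 6 bits from continuation 1 */
                |  (b[2] & 0x3Fu);         /* 6 bits from continuation 2 */

    printf("U+%04X\n", cp);                /* prints U+597D */
    return 0;
}

So e5 = 11100101, a5 = 10100101, bd = 10111101, and the payload bits 0101 100101 111101 regroup to 0101 1001 0111 1101, which is 0x597D.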

J1939 RTR Issue

I have an issue with RTR frames using candump and cansend.
Dumping the broadcasted data is no issue.
Architecture -
Raspberry pi with a pican shield reading data from a J1939 simulator.
I run candump to receive all messages on the bus, and I get an ACK frame back from the simulator when I execute a cansend for PGN FEEC. I'm requesting a preprogrammed VIN but I get nothing back. Here is what I'm seeing from candump:
can0 18FEF500 [8] 7D FF FF 40 25 4B FF FF '}..@%K..'
can0 18FEE900 [8] D1 4B 03 00 D1 4B 03 00 '.K...K..'
can0 18FEF700 [8] FF FF FF FF E0 01 FF FF '........'
can0 18FECA00 [8] 03 FF 00 00 00 00 00 00 '........'
can0 00FEEC00 [0] remote request
can0 18E80000 [8] 01 FF FF FF FF EC FE 00 '........'
can0 0CF00300 [8] FF 7D 7D FF FF FF FF FF '.}}.....'
can0 18FE6C00 [8] FF FF FF FF FF FF 80 7D '.......}'
can0 0CF00400 [8] FF FF 7D 80 7D FF FF FF '..}.}...'
The E800 PGN is a standard ACK message.
And here is the message I am sending while candump is running:
cansend can0 00feec00#r
Basically, I'm not getting the PGN for VIN back. Any ideas?
Turns out there are a couple of issues here.
1. #r (RTR) is not supported with J1939.
2. You don't request PGNs by asking for that PGN directly. The method is to send data to a specific PGN which handles requests; example below.
EA00 is the request PGN to send the data to. Inside the data bytes lives the PGN we want to request, least significant byte first, so PGN FEE5 becomes E5 FE. Three data bytes are required, which is why the trailing 00 is in the message below.
Here is the working request for Engine Hours:
cansend can0 18EA00FF#E5FE00
and the response:
21 00 00 00 8F 01 00 00
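For completeness, here is a minimal SocketCAN sketch in C that sends the same request (a sketch only; the interface name can0 matches the candump output above):

/* j1939_req.c - send a J1939 request for PGN FEE5 (engine hours),
 * equivalent to: cansend can0 18EA00FF#E5FE00 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

    struct sockaddr_can addr;
    memset(&addr, 0, sizeof addr);
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) { perror("bind"); return 1; }

    struct can_frame frame;
    memset(&frame, 0, sizeof frame);
    /* 29-bit ID: priority 6, request PGN EA00, dest addr 00, source FF */
    frame.can_id  = 0x18EA00FF | CAN_EFF_FLAG;
    frame.can_dlc = 3;                  /* request payload is 3 bytes  */
    frame.data[0] = 0xE5;               /* requested PGN, LSB first... */
    frame.data[1] = 0xFE;               /* ...FEE5 -> E5 FE 00         */
    frame.data[2] = 0x00;

    if (write(s, &frame, sizeof frame) != sizeof frame) { perror("write"); return 1; }
    close(s);
    return 0;
}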

Why does my Perl CGI program return a server error?

I recently got into learning CGI and set up an Ubuntu server in VirtualBox. The first program I wrote was in Python, using Vim over SSH. Then I installed Eclipse on my Windows 7 workstation and created the exact same Perl file; just a simple hello-world deal.
I tried running it and was getting a 500 error on it, while the Python code in the same dir (/usr/lib/cgi-bin) was showing up fine. Frustrated, I checked and triple-checked the permissions and that it began with #!/usr/bin/perl. I also checked whether AddHandler was set for .pl. Everything was fine, and on a whim I decided to write the exact same code on the server using Vim, as I had with the Python file.
Lo and behold, it worked. I compared the two files, thinking I'd gone mad, and they are exactly the same. So what's the deal? Why is a file made in Windows 7 with Eclipse different from a file made on an Ubuntu server with Vim? Do they have different binary headers or something? This can really affect my development environment.
#!/usr/bin/perl
print "Content-type: text/html\n\n";
print "Testing.";
Apache error log:
[Tue Aug 07 12:32:02 2012] [error] [client 192.168.1.8] (2)No such file or directory: exec of '/usr/lib/cgi-bin/test.pl' failed
[Tue Aug 07 12:32:02 2012] [error] [client 192.168.1.8] Premature end of script headers: test.pl
[Tue Aug 07 12:32:02 2012] [error] [client 192.168.1.8] File does not exist: /var/www/favicon.ico
This is the continuing error I get.
I think you have some spurious \r characters on the first line of your Perl script when you write it on Windows.
For example, I created the following file on Windows:
#!/usr/bin/perl
code goes here
When viewed with hexdump it shows:
00000000 23 21 2f 75 73 72 2f 62 69 6e 2f 70 65 72 6c 0d |#!/usr/bin/perl.|
00000010 0a 0d 0a 63 6f 64 65 20 67 6f 65 73 20 68 65 72 |...code goes her|
00000020 65 0d 0a |e..|
00000023
Notice the 0d bytes: that's the \r. If I try to run this using ./test.pl I get:
zsh: ./test.pl: bad interpreter: /usr/bin/perl^M: no such file or directory
Whereas if I write the same code in Vim on a UNIX machine I get:
00000000 23 21 2f 75 73 72 2f 62 69 6e 2f 70 65 72 6c 0a |#!/usr/bin/perl.|
00000010 0a 63 6f 64 65 20 67 6f 65 73 20 68 65 72 65 0a |.code goes here.|
00000020
You can fix this in one of several ways:
You can probably make your editor save "UNIX line endings" or similar.
You can run dos2unix or similar on the file after saving it.
You can use sed: sed -e 's/\r//g' or similar.
Your Apache logs should be able to confirm this (if they don't, crank up the logging a bit on your development server).
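For example, converting the script in place on the server (the path comes from the Apache error log above; dos2unix may need to be installed first):
dos2unix /usr/lib/cgi-bin/test.pl
or, with GNU sed:
sed -i -e 's/\r$//' /usr/lib/cgi-bin/test.pl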
Sure, it can.
One environment might have a module installed that the other does not.
Perl might be installed in different locations in the two environments.
The environments might have different versions of Perl.
The environments might have different operating systems.
The permissions might be set up incorrectly in one of the environments.
Etc.
But instead of speculating wildly like this, why don't you check the error log for the error you actually got?
No, they are just text files. Of course, it's possible to write unportable programs, trivially by using system() or other similar services which depend on the environment.

KDE won't automount dvd on CentOS

This is driving me crazy. I have CentOS 5.5 installed running the KDE desktop. I have an NEC 3550 DVDRW drive on /dev/hda. When I put in a DVD, I want it to automount and provide an icon on the desktop, as well as a mount point under /media. It will not automount. Automount is running. HALD is running. The drive is on /dev/hda. It is NOT listed in /etc/fstab. There is NOT a remove policy set up for hald-addon-storage for polling. I can read from the drive using dd. The K3b burn utility can see the drive and read disc info. Running eject and eject -t ejects the drive OK.
I cannot mount from the command line. It says:
mount: block device /dev/hda is write-protected, mounting read-only
mount: wrong fs type, bad option, bad superblock on /dev/hda,
missing codepage or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
dmesg says:
ide: failed opcode was: unknown
ATAPI device hda:
Error: Illegal request -- (Sense key=0x05)
Cannot read medium - incompatible format -- (asc=0x30, ascq=0x02)
The failed "Read Subchannel" packet command was:
"42 02 40 01 00 00 00 00 10 00 00 00 00 00 00 00 "
hfs: unable to parse mount options
attempt to access beyond end of device
hda: rw=0, want=68, limit=4
isofs_fill_super: bread failed, dev=hda, iso_blknum=16, block=16
To me, it seems like some kind of media format issue, but I have no idea. Ideas?
No real solution; it started working on its own.