at the Extreme Measurement Communications Center
of the Oak Ridge National Laboratory
I have need of a decent MSP430 simulator, and I can't seem to find any documentation for msp430simu, an auxiliary part of the MSPGCC project. What follows are some unorganized observations I've made regarding the code and my use of it to render LaTeX for my upcoming presentation at TIDC '08. Forgive me if it isn't terribly coherent or well organized: it's better than nothing, and it ought to save you a lot of time if you find yourself in the same position as I do.
The simulator runs code targeted toward an msp430x135, built with mspgcc. IAR could likely be used as well; see TI EZ430 in Linux with IAR Kickstart for details.
Caveat lector--I had no prior experience with Python, and I wrote this code as a quick hack to generate my slides, not as something to release or maintain.
Be sure to download and review the msp430simu code. This article will make little sense without it. I expect my readers to get their hands dirty!
cvs -d:pserver:anonymous@mspgcc.cvs.sourceforge.net:/cvsroot/mspgcc login
cvs -z3 -d:pserver:anonymous@mspgcc.cvs.sourceforge.net:/cvsroot/mspgcc co msp430simu
After grabbing the source, the Makefile was sufficiently self-explanatory to get an example project up and running. I wished to demonstrate a string copy by dumping the result to LaTeX, so I threw in a dump to disassemble the code and spit it to a file. This worked well, except that my code would behave unpredictably. Adding a single branch would change the execution time from 23 ticks to a timeout after a few thousand ticks!
My mistake was in forgetting to copy PC before calling core.disassemble, which--God only knows why--advances the PC to the next address. Thus, whenever I disassembled an address, I'd accidentally advance the PC twice after executing a single instruction!
The repaired code, cited below, copies PC and decodes the copy. This may be called without damaging the execution, and does not alter the simulation's results.
# Decode a copy of the PC, so the PC itself isn't advanced.
name, args, execfu, cycles = self.disassemble(pc)
# FILE I/O goes here
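Putting it together, the surrounding trace loop looks something like the sketch below. The trace_step() helper and ToyCore are my own illustrations, not part of msp430simu; ToyCore merely mimics the surprising behavior described above, where disassembly advances the pc it is handed, and the real core's interface is only assumed to be roughly similar.

```python
import io

class ToyCore:
    """A toy stand-in for the simulator core, for demonstration only."""
    def __init__(self):
        self.PC = [0x1100]        # one-element list: mutable, like a register
        self.program = {0x1100: ("mov", "#0, r15", None, 1),
                        0x1102: ("ret", "", None, 3)}
    def disassemble(self, pc):
        name, args, execfu, cycles = self.program[pc[0]]
        pc[0] += 2                # disassembly advances the pc it is given!
        return name, args, execfu, cycles
    def step(self):
        self.PC[0] += 2           # execute one instruction

def trace_step(core, out):
    addr = core.PC[0]
    pc = list(core.PC)            # copy the PC before disassembling
    name, args, execfu, cycles = core.disassemble(pc)
    out.write("%04x: %s %s\n" % (addr, name, args))
    core.step()                   # only this advances the live PC
    return cycles

core, out = ToyCore(), io.StringIO()
trace_step(core, out)
trace_step(core, out)
# core.PC[0] is now 0x1104: one advance per executed instruction.
```

Had trace_step() passed core.PC itself to disassemble(), the PC would jump two instructions per step--exactly the unpredictable behavior I was seeing.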
Variables within RAM can be read between instruction executions by calling core.memory.get(). This makes it possible to watch variables: in my case, rather than stepping through hundreds of slides to get to the fun stuff, I only print a slide when the watched variables change. How cool is that?
The only difficult part is that you have to know the address of the object you wish to view. Luckily, with optimizations disabled, GCC places globals at the start of RAM--0x0200 on the msp430x135. Just as in the heap of an architecture with memory to spare for malloc(), global variables begin at the bottom and grow upward, while the stack begins at the top of RAM and grows downward. My globals follow:
char *foo="Hello world.";
const char *bar="Hey.";
Of course, I'm liable to screw this up if I merely predict the compiler's actions, so I double-check variable addresses with gdb. With int r=0xBEEF as the first global, I find that:
(gdb) x/h 0x200
0x200 <_r>: 0xbeef
Note that the default value--0xBEEF in this instance--is not present at the very beginning of actual execution. (An early revision of this article erroneously stated that it was never set; that mistake was the result of a bug in my code.) The value--like all of RAM--is initialized to 0x0000, and is only loaded with its specified value by the resetvector function, which is generated by the compiler.
Following _r are two strings--rather, two pointers to the two global strings that I instantiated:
(gdb) x/xh 0x200
0x200 <_r>: 0xbeef
0x202 <foo>: 0x1170
0x204 <bar>: 0x117d
(gdb) x/s 0x1170
0x1170 <test_puts+48>: "Hello world."
(gdb) x/s 0x117d
0x117d <test_puts+61>: "Hey."
Note the common C mistake that I accidentally committed: not only my const char* but also my char* is a RAM pointer to a ROM string. The pointer values will be loaded by the resetvector, but keeping the characters in ROM will make tracing more difficult when I add string-value dumping later.
Thus, I change my C code to
#define bar "Hey."
char foo[]="Hello world.";
And I now get in GDB:
(gdb) x/xh 0x200
0x200 <_r>: 0xbeef
(gdb) x/s 0x202
0x202 <foo>: "Hello world."
Now the string foo exists in RAM at 0x202. That is to say, &foo==0x202 and the characters themselves reside there; earlier, &foo==0x202 but foo pointed off to 0x1170. This is much easier to find by address when watching variables.
Grabbing an integer is easy; just make a function like the following:
def getint(self, addr, bytemode=0):
    return self.memory.get(addr, bytemode)
To grab a string--which in my examples is a character array rather than a pointer to a character--read and concatenate a series of integers:
def getstr(self, addr):
    s = ""
    while self.memory.get(addr, bytemode=1) != 0:
        s += chr(self.memory.get(addr, bytemode=1))
        addr += 1
    return s
Note that bytemode is set to 1 so as to receive single bytes rather than full 16-bit words. Printing self.getstr(0x202) while the simulation runs strcpy(foo,bar) lets me watch the copy as it progresses.
This works, but it's instruction-accurate. Few people have the patience to sit through a 90-minute lecture on machine language; no one has the patience when it takes four slides to copy a byte. To keep my audience awake, my code only prints a slide when something interesting happens, so the result is not a frame-by-frame record of execution but slices of time at which watched variables change.
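The change-only printing amounts to a small driver loop. The run_watched() function below and its callbacks are my own illustration of the idea, not msp430simu API: step advances the simulation one instruction, sample reads the watched variables, and emit writes a slide.

```python
def run_watched(step, sample, emit, max_ticks=100000):
    # Emit a slide only when the watched values change.
    last = object()               # sentinel: never equal to a sample
    for tick in range(max_ticks):
        now = sample()
        if now != last:
            emit(tick, now)
            last = now
        step()

# Toy demonstration: a "variable" that changes at ticks 0, 3 and 7,
# as a string being copied might.
values = ["", "", "", "H", "H", "H", "H", "He", "He", "He"]
clock = [0]
slides = []
run_watched(step=lambda: clock.__setitem__(0, clock[0] + 1),
            sample=lambda: values[clock[0]],
            emit=lambda tick, now: slides.append((tick, now)),
            max_ticks=10)
# slides == [(0, ''), (3, 'H'), (7, 'He')]
```

Ten ticks of execution collapse into three slides, which is exactly the compression a lecture audience needs.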
For use in LaTeX, it's necessary to sanitize the string output, particularly if it is to later be corrupted. As this is for a conference presentation--rather than a paper--I use Beamer to generate a PDF slideshow, pdf2oo to generate an OpenOffice Impress presentation, and OpenOffice to export to PowerPoint.
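A minimal escaping routine suffices for my purposes. The table and function below are my own sketch, not part of msp430simu; unprintable bytes--which turn up once a string gets corrupted--are rendered as hex.

```python
# Map LaTeX-special characters to safe replacements.
_LATEX = {'\\': r'\textbackslash{}', '{': r'\{', '}': r'\}',
          '$': r'\$', '&': r'\&', '#': r'\#', '%': r'\%',
          '_': r'\_', '^': r'\^{}', '~': r'\~{}'}

def latex_escape(s):
    out = []
    for c in s:
        if c in _LATEX:
            out.append(_LATEX[c])
        elif 0x20 <= ord(c) < 0x7f:
            out.append(c)                       # plain printable ASCII
        else:
            out.append(r'{\tt %02x}' % ord(c))  # corrupted/unprintable byte
    return ''.join(out)

# latex_escape("100% _done_") == r'100\% \_done\_'
```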
This article is continued in MSP430simu and LaTeX, part 2.