Wednesday, December 24, 2008

Need for Speed Carbon

As a Christmas treat, I bought myself Need for Speed Carbon (Mac OS X version). It turns out they used TransGaming Cider, a derivative of the WINE project, to port the game. I don't have a Windows version of the game to compare, but it looks like they stored a fully-installed Windows version of the game under "Need for Speed For Speed". The second "Need For Speed" is the actual game executable. The top-level one is just an annoying game serial number checking wrapper.

Here is a list of open source programs it contains, possibly as a dependency of WINE.
  • FreeType 2.3.4
  • SDL (Simple DirectMedia Layer) 1.2.10
  • libjpeg 6b
  • libpng 1.2.13
  • libsquish (2006)
  • TransGaming Cider build 472, based on WINE
The game uses DirectX 9, which also appears to be the latest version Cider supports.

According to the TransGaming Cider product page, a large number of high-profile games are ported to the Mac this way, notably Spore. A consequence of this approach is that these games will never be universal binaries. WINE essentially takes an unmodified Windows x86 .exe and substitutes only the essential Windows functionality to allow the game to run. However, x86 binaries require an Intel processor; they won't run on PowerPC.

I'm not sure what to make of this. Game companies don't write games to be portable, and Cider really is just a band-aid solution. I could have bought the Windows version of the game instead.

Tuesday, December 16, 2008

GNOME vfs mime-type problem

Every once in a while, I suffer a problem where GNOME no longer properly recognizes the MIME type of my files. For example, the PDF file on my desktop is treated as a text file, to be opened by "Text Editor" by default, when it's supposed to be opened by "Document Viewer" (the left figure depicts the problem; the right figure depicts how it's supposed to look).

Even if I try to open the PDF directly with Evince, it tells me that it is unable to open a document of the text/plain type, as seen in the figure below.
Application launchers on my desktop (shown here for ies4linux) are also not recognized. The files are shown as *.desktop files, and when I double-click on one, Text Editor comes up and shows me the metadata of the application launcher rather than starting the application.
It turns out the problem is that the file permissions under /usr/share/mime are messed up: only root can read and write the files, and nobody else can read them. These files are part of the shared mime-info database generated by /usr/bin/update-mime-database (typically run as root by the package manager), and this is how GNOME vfs (or any compliant desktop environment such as KDE) determines the MIME type of a file.
The reason these files are generated with root-only access is that I set my umask to a pretty strict 0077 (full access for my own user, no access for group and others), and sometimes the update-mime-database program runs under that umask. I'm not sure whose fault it is, because I've never seen the package manager suffer from this problem. But here is how to fix it:
umask 0022
sudo /usr/bin/update-mime-database /usr/share/mime
And that fixes the problem.

Saturday, December 6, 2008

Reversible Computer

Sounds like an interesting idea to me, so I spent a few minutes reading Carlin James Vieri's master's thesis, Pendulum: A Reversible Computer Architecture, approved by Thomas F. Knight Jr. the thesis supervisor.

I skimmed quickly through page 23, and I decided this thesis is bollocks.

The idea of reversible computing is that a computer should never have to flip a bit, which causes energy loss. Instead, bits should be exchanged so that the total numbers of 0 and 1 bits are conserved. Chapter 3 of the thesis suggests that load-store instructions should be replaced with exchange. I have a simpler approach.

If you implement a 32-bit machine, use 64-bit words with exactly 32 zero bits and 32 one bits. The 64-bit word is divided into two 32-bit parts: the useful part and the useless part. The idea is that a 32-bit word cannot have more than 32 zeros or 32 ones, so we can freely allocate these zeros and ones to the "useful" part of the 64-bit word, with the rest temporarily deposited in the "useless" part. Computation is carried out by permuting bits between the useful and useless parts.

My model guarantees that the storage requirement, compared to a traditional computing machine, is bounded by a factor of 2. And existing programs can run unmodified on the new reversible computer.

The thesis goes on to explain reversible and irreversible operations in section 3.2.2. It says that:
Certain datapath functions of two operands, such as XOR, have well defined inverses which allow one operand to be reconstructed unambiguously from the result and the second operand. We call these functions reversible because they may be undone: the inputs may be reconstructed from the outputs. Other functions exhibit data-dependent reversibility.
So far so good. Let's continue.
For example, summing two numbers is reversible unless the sum produces an overflow. Multiplication is also reversible unless an overflow or underflow is produced; multiplying a number by zero is reversible if the result and the non-zero operand are saved.
Now this guy is speaking nonsense. Computers perform finite-domain computation with modulo integer arithmetic. It is well known (at least to students who have studied modern algebra) that addition modulo n is always invertible. Overflow (or underflow) merely means the result "wraps around" because of the modulo. Multiplication, on the other hand, has an inverse only under very specific conditions: (1) the set does not contain zero, and (2) no sequence of multiplications of elements in the set can give you a multiple of n, which becomes zero after the modulo. The unit group U(n) trivially satisfies these two conditions, and it has been shown that any other integer multiplicative group must be isomorphic to an external direct product of two or more unit groups U(n1) × U(n2) × ... × U(nk).

The point is, the author does not understand when multiplication is invertible. He then contradicts himself by saying:
And a few operations of interest in traditional programming languages and architectures, such as logical AND, are irreversible in the sense that the result and one operand are never sufficient to determine the second operand.
The thing is, any computer science student knows that logical AND is like multiplying two {0, 1} numbers, and logical XOR is like adding {0, 1} numbers modulo 2. Why does the author say that multiplication is reversible and then go on to say that logical AND is not? He fails even basic computer science.

You think MIT kids are smart? This guy wrote gibberish in his master's thesis and graduated from MIT. He's smart. Anyway, the rest of the thesis seems to be designing an ALU based on these restricted operations (using exchange as opposed to load-store), something sophomore-year college students are asked to do.

I surely hope Pendulum is not the state of the art in reversible computing. The idea sounds interesting, but I'd be very worried if designers of computer architecture can't grasp basic mathematical facts.

Tuesday, November 11, 2008

Windows Vista / Windows 7

64-bit resource consumption supported by 32-bit extensions and a graphical shell for a 16-bit patch to an 8-bit operating system originally coded for a 4-bit microprocessor, written by a 2-bit company that can't stand 1-bit of competition.

Friday, October 10, 2008

Using Perl like awk and sed

It looks like the designer of Perl really wanted to make it a viable awk and sed alternative. It is possible to run perl with command line flags that make it behave much like awk and sed.

The simplest use, like egrep (output all lines that match a regular expression), is this. The commands listed below are all equivalent.
cat | egrep 'pattern'
cat | awk '/pattern/ { print }'
cat | sed -n '/pattern/ p'
cat | perl -ne 'print if /pattern/'
Here we look at the Perl case more closely. The statement print if /pattern/ is certainly valid Perl code. It is carefully designed so that:
  1. The syntax "statement if expression" is the same as 'if (expression) { statement; }'.
  2. The expression for pattern matching, usually written as '$value =~ /pattern/', can be abbreviated as simply '/pattern/' or 'm{pattern}'. The default value is drawn from $_ (a built-in variable).
  3. If the argument to print is missing, it prints the value of $_.
Alternatively, we can write instead:
cat | perl -ne '/pattern/ and print'
which is the same thing, relying on the fact that the 'and' operator short-circuits.

The command line flags -ne accomplish the following:
  • -e is used to specify the expression to evaluate.
  • -n wraps the expression inside a while loop that places each input line into $_ and evaluates the expression.
Alternatively, there is also a -p flag which replaces -n, and it allows Perl to simulate sed:
  • -p wraps the expression inside a while loop, placing each input line into $_, evaluating the expression (which manipulates $_), and printing the resulting $_.
Here is an example (note that awk, sed and Perl have slightly different regular expression syntax and flags):
cat | sed 's/pattern/replacement/flags'
cat | perl -pe 's/pattern/replacement/flags'
Again, this works because regular expression substitution in perl, normally written as '$value =~ s/pattern/replacement/flags' or '$value =~ s{pattern}{replacement}flags', operates on $_ by default.

Here are a few flags that make Perl more awk like, with field separators.
  • -l strips the input record separator from each line and makes each print statement append an output record separator that matches the input record separator (newline by default).
  • -Fpattern is used to specify input field separator, much like awk's -F option.
  • -a turns on the autosplit mode, so input fields are placed into @F array.
A good mnemonic is perl -Fpattern -lane 'expression'. Example:
cat /etc/passwd | awk -F: '{ print $1 }'
cat /etc/passwd | perl -F: -lane 'print $F[0]'
Note that Perl fields are $F[0], $F[1], ... (elements of the @F array); awk fields are $1, $2, ... instead. However, awk's $0 (the whole input line) corresponds to $_ in Perl.

If we want to combine regular expression matching and field separation, we might have something like:
find . | awk -F/ '/hw[0-9]+/ { print $1 }'
find . | perl -F/ -lane 'print $F[0] if /hw[0-9]+/'
Many awk variables have their Perl equivalents as well. However, in order to use them, the -MEnglish flag must be passed to Perl like this:
cat | awk '{ print NR, $0 }'
cat | perl -MEnglish -ne 'print $NR, " ", $_'
Most notably, the commas in the Perl print statement do not normally print an output field separator. To get behavior more like awk, do this:
cat | awk 'BEGIN { OFS = ": " } { print NR, $0 }'
cat | perl -MEnglish -ne 'BEGIN { $OFS = ": " } print $NR, $_'
In conclusion, Perl is clearly ambitious about making itself very awk- and sed-like. Both sed and awk come with fairly comprehensive programming constructs of their own, but it is nice how Perl acts as a grand unified text processing and reporting tool.

SSH agent on Leopard

I've been using this SSH Agent (featuring Puffy in a businessman suit with a briefcase) since Mac OS X Panther (10.3). It worked for a while on Leopard, until, I believe, the 10.5.5 update broke it: it can no longer access the Keychain for the passphrase.

It turns out that Leopard has native ssh-agent Keychain support. I had to remove the SSH_AUTH_SOCK entry from my ~/.MacOSX/environment.plist file (I had previously pointed it at the former SSH Agent). After logging out and back in, SSH_AUTH_SOCK now points to the socket of the Leopard ssh-agent. Running ssh-add -K (the -K option is Mac OS X specific) works.

Not using my former SSH Agent has the added benefit that my machine no longer has to start Rosetta when I log in. SSH Agent was compiled for PowerPC, not Intel.

Wednesday, September 10, 2008

Minix3 in VMware Player

Here are a few installation tips:
  • Use easyvmx to create a virtual machine configuration.
  • It appears that 256MB of memory should be enough. This can be changed later in VMware player.
  • Network driver should be vlance (which appears as AMD Lance to Minix).
  • The installation goes much faster if you download the Minix ISO (IDE-3.1.2a.iso) and attach that image as the second CDROM drive in VMware Player. Leave the first drive on auto detect.
  • Minix doesn't support more than 4GB of hard drive space. When partitioning, I used 1024 MB for /home and the rest for /usr (which gives about 3000MB).
  • Sound driver: Ensonic ES1371.
  • Disable serial and parallel ports. On my machine I don't have access to them as a normal user.

Saturday, September 6, 2008

Funny how people use scripting languages...

Reflecting on the Firefox TraceMonkey benchmarks against Google Chrome's V8, a trend is emerging in which JavaScript becomes a common run-time language. JavaScript is dynamically typed, and it is now fairly well optimized for performance. More and more applications are written for the Web 2.0 platform, in JavaScript and HTML.

In addition, as we hope that strongly typed systems will be adopted more widely as a debugging aid, serious programmers will probably write applications in something like Google Web Toolkit, which compiles a subset of Java (a typed language) into JavaScript. This further adds to the impression that JavaScript can be used as a common run-time object language for other source languages to compile into.

Before JavaScript, there was C, used as a target by source languages like ATS, Haskell (for bootstrapping), and Smalltalk (also for bootstrapping). It looks like JavaScript may be coming to assume the role of C as an object language on the Web platform.

Thursday, September 4, 2008

Hard to find bug

A while back, I implemented an array-based queue. For the purpose of lock-free, wait-free operations, the two indices for front and rear are strictly increasing, and the corresponding index into the array is computed by taking the modulo of the array length. It is designed that way so that when two threads modify the front or rear pointers, they can detect when they're in each other's way (using compare-and-swap) and help the other thread along to make progress, rather than just spinning and waiting.

The problem is this. On a real computer, these front and rear pointers are represented as finite integers (most likely 32-bit), and they will wrap around after a long time. If the array length does not divide 2^32, then 2^32 modulo the array length is non-zero, while the wrap-around makes the pointers 0, and 0 modulo the length is zero. This causes a discontinuity in array access, and it only happens after manipulating the queue for a very long time.

The fix is to force the array length to be a power of 2, which divides 2^32, so 2^32 modulo the array length is 0, exactly matching what the integer wrap-around does.

Wednesday, September 3, 2008

Interview Questions

Here are some programming interview questions that I came up with. These are not questions taken from someone else; any similarity to existing questions is purely coincidental.
  • Suppose you are given two numbers represented by a list of their prime factors and the corresponding powers (e.g. 2520 = 2^3 × 3^2 × 5 × 7, and 2772 = 2^2 × 3^2 × 7 × 11),

    • Determine what is a good representation of numbers as prime factors; and
    • Write the gcd (greatest common divisor) and lcm (least common multiple) for two arbitrary numbers.
  • Suppose I give you a function (predicate) that takes an integer and tells you whether the number is smaller than or equal to some integer constant only the function knows about. You can assume the hidden constant is an integer greater than or equal to 0. Write a function that tries to guess the hidden integer (hint: use binary search, but watch out for subtle corner cases).

Language features that I cannot live without

  • Meshing implementations together for code reuse. Also a detailed specification, so the compiler can warn when a particular combination may not produce the desired result (e.g. combining an iterator with an O(log n) collection lookup does not result in an O(n) algorithm).
  • Detailed stack trace when an unhandled exception happens. Requires:

    • Tracing stack frames.
    • Converting code address to symbols.
    • Looking up the function type from a symbol (so we know how to print the arguments).
    • The facility to convert any run-time value to a string.
  • Unit testing. Also needs to convert any run-time value to a string for printing the test case that fails expectation.
Build system
  • Like ocamlbuild: specify the source code for the "main" executable and have it automatically figure out the dependencies.
  • Should be able to aggregate source files into an artifact (e.g. a library) and build around the artifacts.
  • Leverage external Makefile targets when necessary.
  • Need a way to automatically generate documentation from source code. The details would be written as special comments.

Update: (May 29, 2009)

Higher order functions
  • Lexically scoped closures. A closure doesn't have to be heap allocated; I think most closures I use do not escape.
  • Higher-order function inlining. A lot of the time I use a higher-order function to customize an algorithm, like a list iterator. In these cases, inlining makes it as efficient as writing out the algorithm by hand every time I use it.

Saturday, May 10, 2008

Compiling hfstar on Mac OS X 10.5 Leopard

Oh joy, trying to compile old stuff on a new system. This article describes the hoops I had to jump through before being able to compile hfstar on Mac OS X 10.5 Leopard successfully.

The first problem, before even running the configure script, is that one of the authors, François Pinard, puts his name in the source code in ISO-8859-1 encoding. This needs to be specified when generating the gettext .po files. Edit the file po/, and find the lines that read something like this:
$(srcdir)/$(PACKAGE).pot: $(POTFILES)
$(XGETTEXT) --default-domain=$(PACKAGE) --directory=$(top_srcdir) \
--add-comments --keyword=_ --keyword=N_ \
And add another option, --from-code=iso-8859-1 after --keyword=N_.

Next, make sure you have GNU sed installed, and that it's found in the path before the stock /usr/bin/sed, which hangs while processing the po/tar.pot file because of the alien character set.

Then, the configure script suffers from a problem where the .dSYM extension causes gcc to think that it's building a bundle, which it is not. This prevents the configure script from detecting many functions as present, which in turn pulls in the lib/ compatibility functions. That leads to some catastrophic results (prototypes not entirely compatible, conflicting qualifiers, etc.). The fix is to add ac_cv_exeext='' before the configure invocation:
ac_cv_exeext='' configure --prefix=... --program-transform-name=s,tar,hfstar,
And the usual make, make install should work.

Tuesday, May 6, 2008

Where to find real-time Linux kernels...

... for the distros that I use.
  • Ubuntu Hardy: the rt flavor is based on the Ubuntu kernel (not vanilla) with the preempt and real-time patches and appropriate config options. It should work for any Ubuntu derivative.
  • CentOS 5 (provided by PlanetCCRMA): again, they seem to use a CentOS kernel with the preempt and real-time patches and appropriate config options. Works for BU Linux.
That's all for now.

Tuesday, March 4, 2008

Ubuntu 7.10 on Powerbook G4


To boot from the live CD, hold the "c" key until the screen turns black and you get the "boot:" prompt. Type the following after the prompt:
boot: live-nosplash-powerpc break=top
and hit enter. After a few seconds, you reach another "(initramfs)" prompt, where you type in:
(initramfs) modprobe ide-core; exit
and hit enter. If you don't do this, it hangs with a cursor on the top left corner of the screen.

Now, proceed with the installation process as usual. When the computer reboots after installation, wait until you get the "boot:" prompt again. This time, type in
boot: Linux break=top
hit enter, and do the same initramfs drill as above.

To avoid doing the initramfs drill every time you boot, add a line to /etc/initramfs-tools/modules that says "ide-core", and then run "update-initramfs -u", i.e.
$ sudo sh -c 'echo ide-core >> /etc/initramfs-tools/modules'
$ sudo update-initramfs -u
The next time it boots, it still appears to hang, but you can hear the hard drive working. After a while, you'll see the Ubuntu login screen.

Pending problems
  • Compiz Fusion doesn't appear to work because my Radeon 7500 supports only 1024x1024 textures. I tried to run compiz using the command:
    $ SKIP_CHECKS=yes compiz
    but I get the desktop background chopped off at the 1024-pixel width boundary. I haven't tried the xorg.conf virtual hack.

Thursday, February 28, 2008

libtool 1.5 bug?

I was trying to compile gtk+-2.6.10 using Fink, on Mac OS X 10.5.2 with a new MacBook Pro. It stopped at the following error (I broke the lines to make it fit in 80 columns):
/bin/sh ../libtool --mode=link gcc  -O3 -funroll-loops -fstrict-aliasing -pipe -
Wall   -o  -version-info 600:10:600 -export-dynamic -rpath /sw
/lib  -export-symbols-regex "^[^_].*" gdk.lo gdkcolor.lo gdkcursor.lo gdkdisplay
.lo gdkdnd.lo gdkdraw.lo gdkevents.lo gdkfont.lo gdkgc.lo gdkglobals.lo gdkkeys.
lo gdkkeyuni.lo gdkimage.lo gdkdisplaymanager.lo gdkpango.lo gdkpixbuf-drawable.
lo gdkpixbuf-render.lo gdkpixmap.lo gdkpolyreg-generic.lo gdkrgb.lo gdkrectangle
.lo gdkregion-generic.lo gdkscreen.lo gdkselection.lo gdkvisual.lo gdkwindow.lo
gdkenumtypes.lo x11/ -L/usr/X11/lib -lXrandr -lXrender -lXinerama -
lXext  -L/usr/X11/lib -lXfixes   -L/usr/X11/lib -lXcursor    -Wl,-framework,Core
Services,-framework,ApplicationServices -L/sw/lib -L/usr/X11/lib -lpangoxft-1.0
-lXft -lXrender -lpangoft2-1.0 -lfontconfig -lfreetype -lz -lpangox-1.0 -lX11 -l
pango-1.0 -lm -lgobject-2.0 -lgmodule-2.0 -lglib-2.0 -lintl -liconv    -lm ../gd
k-pixbuf/ -lintl
libtool: link: warning: `/sw/lib//' seems to be moved
rm -fr  .libs/libgdk-x11-2.0.exp .libs/libgdk-x11-2.0.lax
generating symbol list for `'
/usr/bin/nm -p  .libs/gdk.o .libs/gdkcolor.o .libs/gdkcursor.o .libs/gdkdisplay.
o .libs/gdkdnd.o .libs/gdkdraw.o .libs/gdkevents.o .libs/gdkfont.o .libs/gdkgc.o
.libs/gdkglobals.o .libs/gdkkeys.o .libs/gdkkeyuni.o .libs/gdkimage.o .libs/gdk
displaymanager.o .libs/gdkpango.o .libs/gdkpixbuf-drawable.o .libs/gdkpixbuf-ren
der.o .libs/gdkpixmap.o .libs/gdkpolyreg-generic.o .libs/gdkrgb.o .libs/gdkrecta
ngle.o .libs/gdkregion-generic.o .libs/gdkscreen.o .libs/gdkselection.o .libs/gd
kvisual.o .libs/gdkwindow.o .libs/gdkenumtypes.o  x11/.libs/libgdk-x11.a | sed -
n -e 's/^.*[  ]\([BCDEGRST][BCDEGRST]*\)[  ][  ]*_\([_A-Za-z][_A-Za-z0-
9]*\)$/\1 _\2 \2/p' | /usr/bin/sed 's/.* //' | sort | uniq > .libs/libgdk-x11-2.
grep -E -e "^[^_].*" ".libs/libgdk-x11-2.0.exp" > ".libs/libgdk-x11-2.0.expT"
mv -f ".libs/libgdk-x11-2.0.expT" ".libs/libgdk-x11-2.0.exp"
rm -fr .libs/libgdk-x11-2.0.lax
mkdir .libs/libgdk-x11-2.0.lax
rm -fr .libs/libgdk-x11-2.0.lax/libgdk-x11.a
mkdir .libs/libgdk-x11-2.0.lax/libgdk-x11.a
Extracting /sw/src/
(cd .libs/libgdk-x11-2.0.lax/libgdk-x11.a && ar x /sw/src/
sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < .libs/libgdk-x11-2.0.exp >
gcc -dynamiclib ${wl}-undefined ${wl}dynamic_lookup -o .libs/libgdk-x11-
0.10.dylib  .libs/gdk.o .libs/gdkcolor.o .libs/gdkcursor.o .libs/gdkdisplay.o .l
ibs/gdkdnd.o .libs/gdkdraw.o .libs/gdkevents.o .libs/gdkfont.o .libs/gdkgc.o .li
bs/gdkglobals.o .libs/gdkkeys.o .libs/gdkkeyuni.o .libs/gdkimage.o .libs/gdkdisp
laymanager.o .libs/gdkpango.o .libs/gdkpixbuf-drawable.o .libs/gdkpixbuf-render.
o .libs/gdkpixmap.o .libs/gdkpolyreg-generic.o .libs/gdkrgb.o .libs/gdkrectangle
.o .libs/gdkregion-generic.o .libs/gdkscreen.o .libs/gdkselection.o .libs/gdkvis
ual.o .libs/gdkwindow.o .libs/gdkenumtypes.o  .libs/libgdk-x11-2.0.lax/libgdk-x1
1.a/gdkasync.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/gdkcolor-x11.o .libs/libgdk
-x11-2.0.lax/libgdk-x11.a/gdkcursor-x11.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/
gdkdisplay-x11.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/gdkdnd-x11.o .libs/libgdk
-x11-2.0.lax/libgdk-x11.a/gdkdrawable-x11.o .libs/libgdk-x11-2.0.lax/libgdk-x11.
a/gdkevents-x11.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/gdkfont-x11.o .libs/libg
dk-x11-2.0.lax/libgdk-x11.a/gdkgc-x11.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/gd
kgeometry-x11.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/gdkglobals-x11.o .libs/lib
gdk-x11-2.0.lax/libgdk-x11.a/gdkim-x11.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/g
dkimage-x11.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/gdkinput-none.o .libs/libgdk
-x11-2.0.lax/libgdk-x11.a/gdkinput.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/gdkke
ys-x11.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/gdkmain-x11.o .libs/libgdk-x11-2.
0.lax/libgdk-x11.a/gdkpango-x11.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/gdkpixma
p-x11.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/gdkproperty-x11.o .libs/libgdk-x11
-2.0.lax/libgdk-x11.a/gdkscreen-x11.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/gdks
election-x11.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/gdkspawn-x11.o .libs/libgdk
-x11-2.0.lax/libgdk-x11.a/gdkvisual-x11.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/
gdkwindow-x11.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/gdkxid.o .libs/libgdk-x11-
2.0.lax/libgdk-x11.a/xsettings-client.o .libs/libgdk-x11-2.0.lax/libgdk-x11.a/xs
ettings-common.o   -L/sw/lib -lc -L/usr/X11/lib /usr/X11/lib/libXrandr.2.0.0.dyl
ib /usr/X11/lib/libXau.6.0.0.dylib /usr/X11/lib/libXdmcp.6.0.0.dylib /usr/X11/li
b/libXinerama.1.0.0.dylib /usr/X11/lib/libXext.6.4.0.dylib /usr/X11/lib/libXfixe
s.3.1.0.dylib /usr/X11/lib/libXcursor.1.0.2.dylib /sw/lib/libpangoxft-1.0.dylib
-L/usr/X11R6/lib /usr/lib/libexpat.dylib /usr/lib/libz.dylib /usr/lib/libc.dylib
/usr/X11/lib/libXft.2.1.2.dylib /usr/X11/lib/libXrender.1.3.0.dylib /sw/lib/lib
pangoft2-1.0.dylib /usr/lib/libm.dylib /usr/X11/lib/libfontconfig.dylib /usr/X11
/lib/libfreetype.dylib -lz /sw/lib/libpangox-1.0.dylib /usr/X11/lib/libX11.6.2.0
.dylib /sw/lib/libpango-1.0.dylib /sw/lib/libgobject-2.0.dylib /sw/lib/libgmodul
e-2.0.dylib /sw/lib/libglib-2.0.dylib /sw/lib/libiconv.dylib -lm ../gdk-pixbuf/.
libs/libgdk_pixbuf-2.0.dylib /sw/lib//libintl.dylib /sw/lib/libintl.dylib  -Wl,-
framework -Wl,CoreServices -Wl,-framework -Wl,ApplicationServices -install_name
/sw/lib/libgdk-x11-2.0.0.dylib -Wl,-compatibility_version -Wl,601 -Wl,-current_
version -Wl,601.10
i686-apple-darwin9-gcc-4.0.1: /usr/X11/lib/libXrandr.2.0.0.dylib: No such file o
r directory
It appears the culprit is that libtool assumes the minor and patch version numbers of dynamic libraries are always zero. But that is not so:
$ ls -l /usr/X11/lib/libXrandr.*
lrwxr-xr-x 1 root wheel     17 2008-02-14 09:48 /usr/X11/lib/libXrandr.2.1.0.dylib -> libXrandr.2.dylib
-rwxr-xr-x 1 root wheel 164144 2008-01-14 00:35 /usr/X11/lib/libXrandr.2.dylib
lrwxr-xr-x 1 root wheel     17 2008-02-14 09:48 /usr/X11/lib/libXrandr.dylib -> libXrandr.2.dylib
-rwxr-xr-x 1 root wheel    955 2007-09-09 01:34 /usr/X11/lib/
This is the version of libtool I'm using:
$ dpkg -l libtool*
| Status=Not/Installed/Config-files/Unpacked/Failed-config/Half-installed
|/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad)
||/ Name           Version        Description
un  libtool                 (no description available)
ii  libtool14      1.5.26-1       Shared library build helper, v1.5
ii  libtool14-shli 1.5.26-1       Shared libraries for libtool, v1.5
A workaround is to create the symbolic link:
$ cd /usr/X11/lib
$ sudo ln -s libXrandr.2.dylib libXrandr.2.0.0.dylib
And this hack should satisfy libtool for now. This is not a bug in the gtk+-2.6.10 build script in Fink, nor in the Makefile; the failing gcc command was generated automatically by libtool.

Tuesday, February 26, 2008

Studying s211nup1.bin

Using a hex editor to look at the firmware after extracting it from the downloaded zip file, the first 0x60 (96) bytes contain some sort of firmware header. The rest of the firmware is a binary that runs on something that looks like a 32-bit RISC machine. Further examination reveals that it runs on a little-endian ARM processor, because of the prevalence of 0xe??????? instructions. This is a distinctive feature of ARM: each instruction encodes its execution condition in the 4 highest bits, which can make the code really compact. The condition 0xe means "always."

There is another distinctive feature of ARM. Since each instruction is exactly 32 bits wide, no single instruction can load a whole 32-bit constant. The compiler often loads constants from memory relative to the program counter, and the code later leaps over the constant pool (which would be nonsensical as instructions). This behavior is observed in the firmware as well.

The firmware initializes sp to 0x44000000. If the code is loaded at 0x40000000, as is standard practice, that means the hardware expects 64MB of RAM. There also seems to be a memory-mapped I/O block in the upper memory area around 0xffff????.

The next step is to look at the string literals in the firmware and see which code references a given string. Since string literals are intertwined with code, this could help establish the memory mapping for where the code should be loaded.

Sunday, February 24, 2008

Review of Sanyo VPC-HD1000

I have owned the Sanyo VPC-HD1000 camcorder since November 2007. Having made about 32GB, or 6 hours, of video, I think I can say a little bit about this camcorder. I'll start by saying that all the reviews you read online are still true, most of the pros and cons. I'm not going to repeat what they said; I'm just going to nitpick a bit more. All these remarks apply to the latest firmware update from 11/21/07 as well as the factory firmware.

I'll start with things that I like:
  • EIS (electronic image stabilization) does not reduce the field of view, contrary to what some other websites claim. The image area on the CMOS used for video recording is about 80% of the width of the image area used for taking still photos (I realize that EIS is available for still photos too, and it does not reduce the field of view either). This leaves plenty of image area for EIS and reduces neither the field of view nor image quality. In fact, EIS improves image quality quite a bit, because the H.264 codec does not have to deal with all that shaking. See below about H.264 motion artifacts.

    I find that I simply can't do without EIS. It is pretty much a requirement if you want a steady shot. I tried mounting the camcorder on a folded tripod, which helped stabilize the image but with limited success. Maybe one of those DIY steadicams could help, but I haven't tried.
  • The resolution is very good, good enough to introduce aliasing when scaling the video by half, i.e. from 1920x1080 to 960x540. From casual eyeballing, I'd say 720 lines is about the native resolution of this camcorder, so you get the most out of the HDHR mode (1280x720 at 60p). Note: both the 1080i and 720p modes have the same field of view.

    I imagine if you want 1080@60i, you get the best result by upsampling 720@60p in post-production. I haven't tried it though.
  • Very good low-light performance. ISO 400 is good enough to shoot well-lit city streets at night without additional lighting (to the point that the street lights raise my concern about possible light pollution). Some people aren't happy with the low-light performance because they expect a small lightbulb to illuminate a scene like daylight. That is not going to happen with any camera; this is why professional filmmakers use ridiculous lighting equipment.
And the things I don't like:
  • Cannot change manual focus during recording. Actually, you can't change the aperture while recording either, but not being able to change the focus annoys me the most. You can't change the ISO and shutter speed during recording either, but you shouldn't need to.

    Solution: try the auto-focus settings, both 1-point and 9-point, to see which one suits the scene best. I don't find the aperture to be a big deal, because I almost always fix the shutter speed and ISO setting and let the camcorder figure out the right aperture. Other times I fix the aperture as well for the entire take.

    If you have some money and like to play around, try one of those 35mm depth-of-field adapters. You use a manual-focus lens with the adapter, and you can achieve some amazing effects even with a consumer camcorder. I haven't tried it myself, but I'm eager to find out whether it works with the Sanyo VPC-HD1000. The adapter would be larger than the camera itself, though.
  • The viewfinder preview during standby is not always accurate if you rely on manual ISO, shutter speed, and aperture settings that change the overall exposure. The preview always looks "okay", but the actual scene could be drastically over- or under-exposed by surprise.

    Solution: since I use shutter-priority mode, the lower left corner of the screen tells me which aperture will be used. If it hits F/1.8, there is a risk of underexposure; if it hits F/8.0, there is a risk of overexposure. You just have to tell by experience.

    Otherwise, just hit record and take a test shot for a few seconds. That always works.
  • Heavy artifacts when the scene has a lot of motion, such as looking into a forest on a train ride. Flowing water also causes severe artifacts. In particular, I blame the fact that the camcorder records video at only 12Mbps in both FULLHD and HDHR modes.

    Solution: no solution as far as I can tell, but you can plan the scene to have less motion, or upgrade to a more expensive camcorder if you feel your artistic license is violated.
  • Recording time limit. The camcorder formats an SDHC card with the FAT32 filesystem, which imposes a 4GB file-size limit, or about 45 minutes at 12Mbps in the FULLHD and HDHR modes. Furthermore, recording just stops rather than seamlessly rolling over to a new file.

    Solution: no solution as far as I can tell. Using an 8GB SDHC card wouldn't help, and the camcorder only supports cards up to 8GB anyway. With 16GB cards becoming very affordable nowadays, extended continuous recording time is now limited only by the firmware.
  • Noisy microphone preamp. You can hear a lot of white noise with an external mic plugged in but switched off. There is very little noise at volume level 1, but it becomes painfully audible as you approach level 5.

    Solution: keep the external mic volume at level 1. However, see the next caveat.
  • Rattling sound introduced by the AAC codec. When implemented correctly, AAC is usually very good at compressing audio, but this firmware may have a bad implementation. The rattle is audible if you listen intently during a quiet scene, and it only becomes a problem with a good-quality external mic that has a very low noise floor, since the built-in mic never gives you a quiet scene. I blame the low 128kbps bitrate together with the firmware's AAC implementation; a higher bitrate could compensate for a bad encoder. After all, 128kbps is peanuts compared to the 12Mbps video. I think Sanyo should allow at least 256kbps for audio.

    Solution: raise the external mic volume to introduce preamp white noise that masks the rattle. You have to decide which is more annoying: the white noise or the rattle.
All in all, the problem I find most annoying, with no viable solution, is that the camcorder does not use MPEG-4 H.264/AAC to its full potential. The second most annoying problem is the 4GB file-size limit (45 minutes of recording time) and the lack of support for 16GB SDHC cards. These are all firmware problems as far as I can tell.
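For the record, the 45-minute figure follows almost directly from the bitrates and the FAT32 cap. A back-of-the-envelope check (ignoring container overhead, which accounts for the small gap to the observed 45 minutes), which also shows how small the audio's share of the stream is:

```python
FILE_LIMIT = 4 * 1024**3 * 8        # FAT32 maximum file size, in bits
VIDEO_BPS = 12_000_000              # 12 Mbps H.264 video
AUDIO_BPS = 128_000                 # 128 kbps AAC audio

minutes = FILE_LIMIT / (VIDEO_BPS + AUDIO_BPS) / 60
print(round(minutes))               # → 47, close to the observed 45 minutes

audio_share = AUDIO_BPS / (VIDEO_BPS + AUDIO_BPS)
print(f"{audio_share:.1%}")         # → 1.1%: doubling it to 256kbps would cost
                                    #   about half a minute of recording time
```

In other words, even 256kbps audio would barely dent the recording time, which is why the 128kbps cap looks like a pure firmware decision.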

Wednesday, February 6, 2008

Faux Anti-Aliasing

The idea is to take a piece of scanned text in black and white (after thresholding), convert it to greyscale, and draw short anti-aliased lines between pairs of edge pixels to smooth the appearance of the text. Unlike simply blurring the image, this avoids softening the horizontal and vertical edges.
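The core primitive such a scheme needs is a line drawn with fractional (coverage-based) intensity rather than hard on/off pixels. Here is my own minimal numpy sketch of a Wu-style anti-aliased line, darkening a white canvas the way black text would; it only handles shallow lines for brevity, and it is an illustration of the idea rather than the actual experiment's code:

```python
import numpy as np

def aa_line(img, x0, y0, x1, y1):
    """Draw an anti-aliased black line onto a white greyscale image.
    For each column, the ink is split between the two rows the ideal
    line passes through, proportional to coverage (Wu-style).
    Assumes a shallow line (|dx| >= |dy|); a full version would
    transpose the steep case."""
    if x1 < x0:
        x0, y0, x1, y1 = x1, y1, x0, y0
    dx = x1 - x0
    slope = (y1 - y0) / dx if dx else 0.0
    for x in range(x0, x1 + 1):
        y = y0 + slope * (x - x0)
        yi = int(np.floor(y))
        frac = y - yi                       # how far the line sits into row yi+1
        # darken both rows; min() keeps previously drawn darker ink
        img[yi, x] = min(img[yi, x], int(255 * frac))
        img[yi + 1, x] = min(img[yi + 1, x], int(255 * (1 - frac)))

canvas = np.full((8, 16), 255, dtype=np.uint8)  # white background
aa_line(canvas, 1, 2, 14, 5)                    # gentle diagonal stroke
```

Where the line crosses a row exactly, one pixel is fully black and its neighbor untouched; between rows, the ink is shared, which is exactly the intermediate grey that smooths the staircase on a diagonal edge while leaving purely horizontal and vertical strokes crisp.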