Monday, August 20, 2012

Brainfuck Interpreter 2.0

Some time ago I wrote a little Brainfuck interpreter with my own ProgramOptions implementation and, I have to admit, a rather questionable code design...

So today I got a bit bored and had a look at Boost.Spirit.

I wrote a completely new Brainfuck interpreter, which also happens to work a bit better than my old one... the old one somehow had problems with loops.
Both versions can be found here:
http://www-user.tu-chemnitz.de/~bytow/
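
Since the loop handling was exactly what was broken in the old version, here is a minimal sketch in plain C of how the matching-bracket jumps can be done. This is just an illustration, not the actual code: the new interpreter is built on Boost.Spirit and structured quite differently, and this sketch assumes a well-formed program with balanced brackets.

#include <stdio.h>

/* Tiny Brainfuck evaluator; the interesting part is how '[' and ']'
 * scan for their matching bracket while tracking nesting depth. */
static void run(const char *prog)
{
    unsigned char tape[30000] = {0};
    unsigned char *ptr = tape;
    const char *pc = prog;

    while (*pc) {
        switch (*pc) {
        case '>': ++ptr; break;
        case '<': --ptr; break;
        case '+': ++*ptr; break;
        case '-': --*ptr; break;
        case '.': putchar(*ptr); break;
        case ',': *ptr = (unsigned char)getchar(); break;
        case '[':
            if (!*ptr) {            /* cell is zero: skip to the matching ']' */
                int depth = 1;
                while (depth) {
                    ++pc;
                    if (*pc == '[') ++depth;
                    else if (*pc == ']') --depth;
                }
            }
            break;
        case ']':
            if (*ptr) {             /* cell is non-zero: jump back to the matching '[' */
                int depth = 1;
                while (depth) {
                    --pc;
                    if (*pc == ']') ++depth;
                    else if (*pc == '[') --depth;
                }
            }
            break;
        }
        ++pc;
    }
}

int main(void)
{
    run("++++++++[>++++++++<-]>+.");  /* 8*8+1 = 65, prints 'A' */
    return 0;
}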

I was not able to test the new interpreter on Windows yet, but I will do that as soon as possible.
I also hope my style has improved since the first version :-)
So feel free to comment and criticize ;-)

Sunday, July 8, 2012

Bastard Operator From Hell - Kernel Module

Recently I started familiarizing myself with the Linux source code.
So today I thought I should simply start by writing a little "hello world" kernel module.
But as "hello world" is way too easy and simply not sufficient to test certain functionality, I decided to implement kmod_bofh (that's what I named it :)).

So what does kmod_bofh (or bofh.ko as the binary is named) do?
It creates a pseudo-file /proc/excuse which, when read, returns a random BOFH excuse.
Thus you can run "cat /proc/excuse" to retrieve a BOFH excuse when you need it ;-)
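
Just to illustrate the idea, a minimal version of such a proc entry could look roughly like this. This is only a sketch, not the actual kmod_bofh code: the excuse strings here are made up, and it assumes the 3.5-era proc API where proc_create() takes a struct file_operations (much newer kernels use struct proc_ops instead).

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/proc_fs.h>
#include <linux/fs.h>
#include <linux/random.h>

/* Placeholder excuses for illustration; the real module ships its own list. */
static const char *excuses[] = {
	"solar flares",
	"static from nylon underwear",
	"cosmic rays flipped a bit",
};

static ssize_t excuse_read(struct file *file, char __user *buf,
			   size_t count, loff_t *ppos)
{
	char line[64];
	unsigned int idx;
	int len;

	if (*ppos > 0)	/* report EOF after the first read */
		return 0;

	get_random_bytes(&idx, sizeof(idx));
	len = scnprintf(line, sizeof(line), "%s\n",
			excuses[idx % ARRAY_SIZE(excuses)]);

	return simple_read_from_buffer(buf, count, ppos, line, len);
}

static const struct file_operations excuse_fops = {
	.owner = THIS_MODULE,
	.read  = excuse_read,
};

static int __init bofh_init(void)
{
	if (!proc_create("excuse", 0444, NULL, &excuse_fops))
		return -ENOMEM;
	return 0;
}

static void __exit bofh_exit(void)
{
	remove_proc_entry("excuse", NULL);
}

module_init(bofh_init);
module_exit(bofh_exit);
MODULE_LICENSE("GPL");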

For those who do not know what BOFH is...
Read this: http://bofh.ntk.net/BOFH/index.php

I compiled and linked my source against the current kernel 3.5-rc6 (from Linus' git repository).
You can find the source code here:
http://www-user.tu-chemnitz.de/~bytow/kmod_bofh-src.tar.gz

Tuesday, August 23, 2011

Huffman Compressor

Just wrote a little compression tool.

It uses the Huffman algorithm to build a prefix code tree,
which is then used to compress the input file.
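
To give an idea of how the algorithm works, here is a rough sketch in C of building such a tree. This is only an illustration, not the code from the actual tool: it uses a naive O(n^2) merge step instead of a priority queue, and error checking is omitted.

#include <stdlib.h>

/* A node of the prefix code tree: leaves carry a byte value,
 * inner nodes just combine the frequencies of their children. */
struct node {
    unsigned long freq;
    int symbol;                  /* 0..255 for a leaf, -1 for an inner node */
    struct node *left, *right;
};

/* Builds a Huffman tree from per-byte frequencies by repeatedly
 * merging the two lowest-frequency nodes under a new parent. */
static struct node *build_tree(const unsigned long freq[256])
{
    struct node *pool[256];
    size_t n = 0, i;

    for (i = 0; i < 256; i++) {
        if (freq[i] == 0)
            continue;
        struct node *leaf = calloc(1, sizeof *leaf);
        leaf->freq = freq[i];
        leaf->symbol = (int)i;
        pool[n++] = leaf;
    }

    if (n == 0)                  /* empty input */
        return NULL;

    while (n > 1) {
        /* find the indices a, b of the two smallest frequencies */
        size_t a = 0, b = 1;
        if (pool[b]->freq < pool[a]->freq) { a = 1; b = 0; }
        for (i = 2; i < n; i++) {
            if (pool[i]->freq < pool[a]->freq)      { b = a; a = i; }
            else if (pool[i]->freq < pool[b]->freq) { b = i; }
        }

        /* merge them under a new inner node */
        struct node *parent = calloc(1, sizeof *parent);
        parent->freq = pool[a]->freq + pool[b]->freq;
        parent->symbol = -1;
        parent->left = pool[a];
        parent->right = pool[b];

        /* remove b (swap in the last element), then replace a with the parent */
        if (a > b) { size_t t = a; a = b; b = t; }
        pool[b] = pool[--n];
        pool[a] = parent;
    }

    return pool[0];              /* the root of the prefix code tree */
}

The code for each byte then falls out of the tree: follow the path from the root to the byte's leaf, emitting one bit per branch.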

The format of a compressed file looks as follows:

[signature: {'H', 'C', 'F', '\0'}] (HCF = Huffman Compressed File)
[content size] (a 64-bit unsigned integer containing the size of the original file)
[huffman tree] (the serialized prefix code tree... for more details have a look at huffman_tree.c)
[content] (finally we have the compressed content)
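
As a small illustration, writing the fixed part of such a header could look like the sketch below. Note that the byte order of the content size is an assumption here (little-endian); have a look at the actual sources for the real layout.

#include <stdio.h>
#include <stdint.h>

/* Writes the HCF signature and the 64-bit content size.
 * Little-endian byte order is assumed for illustration. */
static int write_hcf_header(FILE *out, uint64_t content_size)
{
    const unsigned char signature[4] = { 'H', 'C', 'F', '\0' };
    unsigned char size_le[8];
    int i;

    for (i = 0; i < 8; i++)
        size_le[i] = (unsigned char)(content_size >> (8 * i));

    if (fwrite(signature, 1, sizeof signature, out) != sizeof signature)
        return -1;
    if (fwrite(size_le, 1, sizeof size_le, out) != sizeof size_le)
        return -1;

    /* ...followed by the serialized Huffman tree and the compressed content. */
    return 0;
}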

The quality of the compression depends heavily on the content of the input file.
A rather big text file can be compressed very well,
while large binary files may nearly keep their size.

This compression algorithm is not recommended for very small files,
as such files might even grow: the signature, the 8-byte content size,
and the serialized tree add a fixed overhead that the compressed content
first has to make up for.

The program source (GPLv3) can be found here: http://www-user.tu-chemnitz.de/~bytow/
The package also contains the Windows binaries.