Ugh. I've always hated the word 'blog'. In any case, this is a chronologically ordered selection of my ramblings.

Crowd-sourcing and the Wisdom of Crowds

The Wisdom of Crowds makes some good points about how and when a large number of people can do things better than experts. Crowdsourcing often refers to splitting work across a large number of people. However, what I occasionally see is a "crowd-based" problem-solving approach which is... not really.

For example, if you have a difficult problem, you might shove it out to the crowd. A fan of crowds might say the Netflix Prize was solved that way. However, in that sort of case, it isn't the crowd solving it, it's some bright people. You've just given the problem to some bright people, and a bunch of normal people.

In other words, crowd-sourcing like this is not really about problem-solving, it's about problem-routing - effectively packet-switching problems towards the appropriate experts, even if you don't know who they are. You feed the problem into the social graph and hope it is forwarded to an appropriate destination.

Sometimes, a crowd really is only as wise as its wisest members, but in a large crowd that can still be pretty darn wise.

Posted 2015-05-21.


The Internet is Street Art

I found this note in a file simply saying "The Internet is Street Art". I vaguely remember it from a couple of years ago when I went to Covent Garden, and saw some street performance.

My understanding of street performance, gleaned from limited experience of places like Covent Garden and the Edinburgh Fringe, is that you get in the way of people and show off. Well, it's more than that, it's moderate talent, the ability to manipulate a crowd to create a "people join the crowd to see what the crowd's for" effect, and a willingness to get in the way of those not (yet) interested. The cost of entry as a spectator is nothing, except your attention, and you're probably underestimating the value of that.

And in so many ways, this is the web. It's basically about going viral and getting a crowd. Quality is not the ultimate concern, and the only price is free. And fundamentally, the idea is to get in the way of passers-by.

I don't really like street performance. *sigh*

Posted 2015-05-21.


Linux 1.0 kernel source reading - 7: fs/exec.c and friends

For my first foray into the "fs" directory, I thought I'd try something relatively loosely coupled to the file system, and more closely tied to the rest of the kernel I've been reading: the "exec" infrastructure.

exec.c starts with various utility functions whose context will have to arrive later. Bottom-up code isn't always the easiest to understand!

It's kind of strange that "flush_old_exec" doesn't explicitly share code with "sys_exit", as they're both freeing a number of process resources. It's not exactly "Don't Repeat Yourself". (If I were designing this, in a slightly more high-level, OO and RAII kind of way, I'd separate out the structures representing the bits of process that survive across an "exec" call, and those that don't. On "exec", I'd tear down one structure, on "exit" both.)
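To sketch what I mean (a hypothetical re-organisation in plain C, very much not what the kernel actually does):

    /* Hypothetical: partition process state by lifetime, so exec and
     * exit become teardown of one or both structures. */
    struct exec_scoped {              /* dies on exec (and hence on exit) */
        struct vm_state *mm;          /* memory mappings */
        struct sig_state *signals;    /* signal handlers */
    };

    struct process_scoped {           /* dies only on exit */
        struct file_state *files;     /* open descriptors survive exec */
        int pid;
    };

    void teardown_exec_scoped(struct exec_scoped *e);  /* shared by both */

    void do_exec_cleanup(struct exec_scoped *e)
    {
        teardown_exec_scoped(e);      /* flush_old_exec's job */
    }

    void do_exit_cleanup(struct exec_scoped *e, struct process_scoped *p)
    {
        teardown_exec_scoped(e);      /* exit frees this too... */
        /* ...plus everything with full process lifetime */
    }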

Mostly, though, this is straightforward code, and shows how simple the a.out format is!

binfmt_elf.c: ELF is more complicated, shock. Probably best read with a copy of the ELF standard. Instead, I'll skim. The "/* Now use mmap to map the library into memory. */" comment appears to be in the wrong place, after the work is done!

The interpreter-loading stuff was, unsurprisingly, a bit obscure until I read up on PT_INTERP. It still feels odd having a.out-loading code in the middle of the ELF code, and does rather seem as if there's redundancy in the code.

There are lots of long functions. It might be nice to break them into short functions with well-defined purposes. The overloaded returning of error codes and success values is a bit painful, although I'm not sure there's a pretty alternative in vanilla C.

binfmt_coff.c is written by some other person. Heavily commented, which is nice, but the comments starting in column 0 inside functions make it difficult to see where functions start and end! It refers to "coff.h", which demonstrates a consistent inability to spell "O'Reilly". "load_object" is a 450-line function? Really?

It looks like there's a bug in reading the sections - if there's a badly-aligned section, then a good one, the status code is overwritten and it'll ignore the bad alignment, I think. It probably checks again later, though. There are odd error messages on strange section counts, too, and the continual conditioning on status suggests that "goto"-based error handling may not be that bad after all!
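For comparison, the "goto" pattern that all the repeated status checks are crying out for - a generic sketch of the idiom, not the kernel's code (the section structure and helpers here are invented):

    #include <errno.h>
    #include <stdlib.h>

    #define SECTION_BUF_SIZE 4096              /* illustrative */

    struct section { unsigned long vaddr; };

    static int badly_aligned(struct section *s)
    {
        return s->vaddr & 0xfff;               /* not page-aligned */
    }

    /* One exit path: the status set at the first failure can't be
     * silently overwritten by a later, successful iteration. */
    static int load_sections(struct section *s, int count)
    {
        int status = 0;
        char *buf = malloc(SECTION_BUF_SIZE);
        if (!buf)
            return -ENOMEM;

        for (int i = 0; i < count; i++) {
            if (badly_aligned(&s[i])) {
                status = -ENOEXEC;
                goto out;                      /* bail at the first bad one */
            }
            /* ... map the section ... */
        }
    out:
        free(buf);                             /* clean-up happens exactly once */
        return status;
    }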

Having said all that, the heavy commenting makes this file pretty readable.

Library-loading is much easier for COFF than ELF. If a COFF header asks to pull in a shared library, rather than execute a loader, it just uses "sys_uselib". This is presumably rather less flexible than the ELF approach, but makes for much simpler code!

Bonus 1: filesystems.c is simply a (ifdef-configured) registry of the known filesystem types.

Bonus 2: file_table.c manages the circular list of all files. For some reason, it mixes in unused file entries with the allocated ones, and generally seems slightly naff code if you ask me. Maybe reading other source will make this clearer?

Posted 2015-05-20.


Linux 1.0 kernel source reading - 6: IBCS and IPC

"IBCS" turns out to be nothing more than a "NYI" stub. And after all the iBCS compatibility stuff in signal handling!

ipc/util.c is the main entry point for sysv ipc. There's a single syscall with a switch statement. sysv ipc support is configurable, so there's an ifdef to switch to dummy code for the disabled case.

ipc/sem.c handles semaphores. The need for sem_lock and IPC_NOID seems odd to me: why not kmalloc first, before fiddling with a shared structure? "sys_semctl" is ugly (the sequence of switches on the same command is odd). The code doesn't quite feel up to the usual standard of the rest of the kernel.

ipc/msg.c - message queues. The business of rechecking the entry's id in "sys_msgsend" makes me nervous. It feels race-condition-like, although it probably really is a valid race condition that meets the expected interface. Much of the code is copy and paste of sem.c. Not quite sure if this repetition is bad style or just the faff of doing this in C.

ipc/shm.c - shared memory. Again, some shared infrastructure. "shm_map" runs off the actual page table, rather than the mappings structure, which seems inefficient. It also maps in the new top-level pages while checking the existing mappings. If shm_map fails due to a clash, you'll have a pile of empty top-level tables, which is mildly naff.

One mildly tricky (and uncommented) bit of code: As far as I can tell, swapping shared memory works by getting the reference count down to 1, which represents the reference count from the shared memory object itself (the other references being from it being mapped into processes). At that point, the page can be swapped.

Overall, I was a little underwhelmed by the quality of the IPC code...

Posted 2015-05-16.


Learning modern web technology

It occurred to me that if I am to work for a web firm I should really know some web technology. My knowledge of the web was extremely out-of-date and patchy, so I thought it time to make it up-to-date and patchy. The aim wasn't to learn all the latest trendy technologies, but more what's been the backbone for a while.

The web used to be a horrible mess, but with the triumvirate of HTML5, CSS and Javascript, plus modern browsers, things are a lot neater and more pleasant. To learn, I did a pile of courses on codecademy. This was rather neat: not only did I learn a bit of modern HTML, CSS and JS, but I also got pointed at Bootstrap and jQuery. I guess you can say a technology is mature when it's got helper libraries on top of it! I understood the existence of JS helpers, but CSS helpers were a bit of a surprise.

Anyway, the existence of these libraries explained something of a mystery to me: Why so many websites have such similarity in style, beyond the demands of fashion. Well, they're all using the same libraries. Also, I got to find out that the basic use of jQuery is not to do with querying, but for putting animations and effects on your web pages.

It also reminds me how conventional web pages are looking now. I remember "The web is a new medium! The old rules don't apply!". And now, the web looks like magazines. Ho hum. Anyway, my website's been updated to look like all the others, using the same libraries as everyone else. It's good practice, right?

One of the things I found interesting about Javascript was quite how heavily anonymous functions are used for callbacks and the like. It's almost like continuation-passing style. On the one hand, it's interesting to see bits of functional programming becoming quite so mainstream. On the other, it shows how clunky things can be without the right abstractions. Partial application is a pain, and you can't e.g. use monads or whatever to abstract away the sequencing of callbacks.

Finally, I picked up AngularJS. I must admit, I feel somewhat ambivalent about it. For dynamic data, you've got to do things on the client-side. However, it seems to go a bit far - the codecademy tutorials encourage you to do things on the client-side that could have been done on the server-side. This seems inefficient and an unpleasant move away from the underlying document-centric view of the web. On the other hand, it's good for creating dynamic sites quickly and flexibly, without having to build server-side and client-side frameworks in tandem.

I can see this kind of mindset leading to painful mobile apps. As you can make a fairly fully-featured UI in a web browser, complete with JS implementation, why not just use that for the backbone of your app? Unfortunately, mobile phones don't have the spec of desktops. Well... I guess Moore's law is going to sort my objection out sooner or later, and you can have pretty and fast apps on your phone, at the expense of battery life.

Mostly, though, I can see the use in Angular. With modern hardware and the kind of web apps people are developing, efficient technology is beaten by RAD technology 95% of the time.

Posted 2015-05-08.


Julia sets with continuous deformation

I've been playing with HTML5 canvases and Javascript to create an in-browser Mandelbrot/Julia set viewer which can show the Julia set iteration function as a continuous deformation. It's something I've been thinking of for a while, but it's rather nice to implement it!

Without further ado here is my Mandelbrot/Julia set toy.

Posted 2015-05-07.


Getting rid of some books

I love books. I very rarely get rid of books. I have actually kept the worst books I've read, as a reminder of how poor they were. So, it's unusual for me to get rid of books, but now I'm getting rid of a few. To mark the occasion and remind me of them, I'm semi-reviewing them!

O'Reilly Make There was a point when I'd assume that any O'Reilly book on a subject was the bee's knees. Those were the days when standard utilities weren't documented every which way all over the internet. You might have a man page or info files for the GNU version, but it wasn't particularly approachable. Certainly no Stack Overflow. So, I bought a book on "make". It was mediocre at the time, and useless now.

O'Reilly Lex and Yacc Pretty much the same story.

Java in a Nutshell On the other hand, this was pretty useful. A quick summary of the Java libraries that could be flicked through quickly was great - online docs tended to be long-winded and difficult to search. Unfortunately, this book only goes to Java 1.1. As the language has grown so much, I expect the latest nutshell might not live up to its name.

Unix in a Nutshell Bought somewhat later, this was not so useful. Unix documentation is pretty thorough, and nutshell-like anyway, so this has less value. There are summaries of things like sh, sed and awk, but that's superseded by the O'Reilly shell scripting book I bought more recently.

Programming Perl This O'Reilly book is generally deemed to be a classic. I never got into it. My review gives the reasons, but basically Perl is just not the way I think, and now that Perl has lost its favoured status, I'm glad to be rid of it.

A Guide to LaTeX (Kopka and Daly) A not great guide to a not great system. Overall, the output of LaTeX is great, but it really does demonstrate what happens if you fail to separate the mark-up from presentation information from the start, and moreover if (at the TeX level) you create a general-purpose programming language without admitting it and making a proper language of it. I just feel the book really reflects the underlying system, rather than being a disaster in itself. If I'm not writing a proper paper, it's Markdown and MathJax for me right now.

Generic Programming and the STL (Austern) Way back when (PhD days, or before?), I wanted to understand the STL, so I chose this book. Bad choice. The subtitle is "Using and Extending the C++ Standard Template Library", and the key word is Extending. This book is much more about the interfaces and contracts provided by the library, rather than normal usage. For example, the actual containers get 70 pages of documentation for 15 classes, in a 500 page book, right at the back. Perhaps good for language lawyering and actually creating STL extension libraries, but not much use to me otherwise.

The Java Tutorial I previously reviewed this book, and, well, I've read it and I don't think it holds much long-term value for me now.

The Nitpicker's Guide for Classic Trekkers This is a detailed episode guide to the original series of Star Trek. However, over time it has become obvious to me that I fundamentally don't care much for the original series.

Roget's English Thesaurus If the internet breaks, synonyms will not be my greatest worry.

Reverse Dictionary Interestingly enough, this is pretty much a synonym for "thesaurus".

The Bloke's Guide to Pregnancy Reviewed here. Not planning to go through that again! In this area, I would recommend "The Rough Guide to Pregnancy and Birth", followed by "What to Expect - the first year".

Posted 2015-04-28.


Linux 1.0 kernel source reading - 5: Memory management

Over to the "mm/" directory.

kmalloc.c: Hmmmm. Something else that implements kmalloc/kfree. Careful reading of lib/Makefile reveals that lib/malloc.c is not actually compiled, so I guess this is used instead!

I prefer the other implementation. This one has crazy indents and fewer comments. Also, it puts the control block in the page with the data, which means allocation blocks of powers of two don't fit in neatly, and you can't actually allocate large chunks efficiently.

Oh, and it allows a pointless race in the code to allocate from a new page: If there are no blocks available, it constructs a new page, adds it to the list, and then races against anything else to take an item out of that list, complete with code to give up if it loses the race too often. More simply, it could just allocate the chunk from the newly constructed page before hooking the remainder into the pool. Naff design.
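The fix is just a matter of ordering - something like this sketch (hypothetical names; the real structures differ):

    struct pool;
    struct page_hdr;
    struct block;

    struct page_hdr *new_pool_page(void);
    struct block *first_block(struct page_hdr *pg);
    void add_remainder_to_pool(struct pool *pool, struct page_hdr *pg,
                               struct block *taken);

    /* Claim the first block from the fresh page *before* publishing the
     * page on the shared free list, so there's nothing to race for. */
    struct block *alloc_from_new_page(struct pool *pool)
    {
        struct page_hdr *pg = new_pool_page();
        struct block *b = first_block(pg);     /* ours, unconditionally */
        add_remainder_to_pool(pool, pg, b);    /* now let everyone else in */
        return b;
    }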

And there are other minor stylistic things I don't like.

swap.c: _get_free_page lives here, so it seems a good place to start. In the standard style of C, it's written in a bottom-up fashion, and there aren't high-level explanatory comments, so it's a bit of an exercise to work out what it intends to do as we go along.

The basic idea at the start of the file is to treat swapped-out pages as a specialised resource, with functions to claim, free, duplicate (increase ref count), etc.
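As a sketch of that resource-style interface (my paraphrase, with invented names - the real code encodes entries rather differently):

    /* Swap entries behave like a ref-counted resource: a slot shared by
     * several page tables survives until the last user lets go. */
    struct swap_entry { int count; /* ... device, slot number ... */ };

    void release_swap_slot(struct swap_entry *e);  /* back to the device */

    void swap_duplicate(struct swap_entry *e)
    {
        e->count++;               /* another reference to the slot */
    }

    void swap_free(struct swap_entry *e)
    {
        if (--e->count == 0)
            release_swap_slot(e); /* last reference gone */
    }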

It looks like swapped-out pages get duplicated when they're swapped in... but the pages that actually get paged out are ones that are not already backed by a file, so the paging back in effectively just does a copy-on-write-like step as the page is pulled in.

The next part of the file moves on to the mechanism for trying to swap out pages (which calls into other areas, mildly breaking the layering of abstractions). Then later parts of the file deal with freeing pages (free_page), and getting a page (__get_free_page), kicking off some swapping if necessary. The file concludes with the turning on and off of swap devices and swap files.

memory.c: This file is a chunky one, and the next layer up the abstraction stack. Rather than dealing with the allocation of individual pages (and swapping), it deals with page tables (modulo swapping) and the mapping of ranges of VM.

The "oom" function is commented to send an "untrappable SIGSEGV". The code seems to send a SIGKILL. For some reason, this amused me.

Various functions have heavily duplicated code - "clear_page_tables" vs. "free_page_tables". This reveals the pain of working in a low-level language like C (I'm currently reading On Lisp).

"share_page", to find a page for a VM mapping that can be shared, looks hideously expensive, and the wrong way to do things. VM mappings should know where their pages are, not have to scrape processes! Sharing with buffer cache seems a more pleasant alternative, too (and is perhaps how you should really do this look-up).

The end of this file deals with the page fault handling mechanism, and the basic VM ops structure for a file memory mapping.

mmap.c: This builds memory mapping infrastructure on the above. Mostly works on the VM area linked list, so it's not clear how this is kept in sync with the actual page tables. e.g. munmap partially unmapping an area doesn't actually necessarily update the page table, so partially unmapped areas may actually remain accessible in the address space. Or I could be misreading the code. The fun of maintaining invariants in a low-level language!

vmalloc.c: This one works a little more independently. Its purpose is not explained, but it is evidently to do multi-page kernel memory allocations. It does this by finding free physical pages and assigning virtual addresses to create the needed contiguous memory in virtual address space. It stops physical page fragmentation from being a problem for large kernel allocations. Rather obviously, it assumes that you have a lot more virtual address space than physical RAM, which was certainly the case back in the day. It creates the kernel memory mapping by updating the kernel memory page tables for each process, which seems an inefficient but simple way to do it.
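Schematically, the strategy looks like this (a heavily simplified sketch; the helper names are invented):

    #define PAGE_SIZE 4096UL

    /* Hypothetical helpers standing in for the real machinery. */
    unsigned long find_free_kernel_vrange(unsigned long size);
    unsigned long get_free_page(void);
    void map_kernel_page(unsigned long vaddr, unsigned long page);

    /* Stitch scattered physical pages into a contiguous *virtual* range,
     * so physical fragmentation can't defeat a large kernel allocation. */
    void *my_vmalloc(unsigned long size)
    {
        unsigned long vaddr = find_free_kernel_vrange(size);
        if (!vaddr)
            return NULL;
        for (unsigned long off = 0; off < size; off += PAGE_SIZE) {
            unsigned long page = get_free_page();  /* any old page will do */
            if (!page)
                return NULL;    /* (the real code unwinds properly) */
            map_kernel_page(vaddr + off, page);    /* edit the page tables */
        }
        return (void *)vaddr;
    }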

Theoretically, I think I've covered the real core of the system now. Realistically, given "everything is a file" (more or less), I need to read the generic parts of "fs/" to claim I've read the core parts. For example, "exec" is in fs/. Next up, I'll take a brief detour to ipc/ and ibcs/, and then time to tackle fs/.

Posted 2015-04-19.


Recreating the NTK 31337/Elite t-shirt

One of my favourite t-shirts is the old NTK Elite t-shirt. My wife has pointed out it's a bit grubby, and needs replacing. I thought she might just be jealous of a t-shirt I've known longer than her, but I think she does have something of a point.

Having recently experimented successfully with custom t-shirt printing, I thought I could create myself a replacement. And that's what I did. A pixel-based design is fairly easy to copy, although getting the measurements of the wide-open spaces was something of an estimation. I tried to keep in the infelicities of the original design, keeping it as close to the original as possible.

One thing I hadn't realised originally is that the main screen is a simple 256 pixels across, but the strap-line at the bottom isn't on the same resolution (which can be checked simply - there are more than 32 8-pixel letters). In order to keep the whole design as a simple low-resolution image, I rejigged the bottom, squashing a few spaces to fit into a 256-pixel-wide image, and I must admit I'm pretty happy with the result.

The resulting t-shirt, next to the original, for comparison, looks like this:

Overall, I think it looks pretty good. The placement is about right, the colour of the shirt is fine. The image isn't quite as wide as the original. I thought I adjusted the size to match what I measured from the original... perhaps it was slimmed to fit into the bounds of their printing mechanism? Ho hum. In any case, I'm pretty happy with it.

Posted 2015-04-13.


An insight into the mind of a small child, or communications problems

I've been clearing out stuff from my parents', as previously described, and came across a picture I painted when rather small:

I was always rather proud of myself for this painting, but apparently the non-standard colour scheme concerned my parents a little. Why is everything strange colours, when other children were using the colours rather more conventionally? Of course, they never said anything at the time. Neither, of course, did I.

I was just painting so that each colour was used for the same number of edges. In fact, I remember being annoyed as I had the counts matching perfectly, but the teacher told me I needed to paint some more things, and the addition (the swings, I think) made it very difficult to get the counts matching - I couldn't make the extra number of lines required into a multiple of four!

On second thoughts, given what I was trying to do, perhaps my parents were right to worry...

Posted 2015-04-13.


Wrapping up computers and sensible industrial design

I've just spent the weekend moving the last few bits out of my parents' house - just a few odds and ends that never got prioritised (yes, I took my time). This includes a few old computers, which I wrapped up in a bit of bubble-wrap for transport. This experience, the fact I want to transport them, and the fact that people often want to stack up a set of computers all reveal a useful design principle: a computer should be able to lie safely on any face.

You'd have thought this would be obvious. A decent number of machines have their ports on the back recessed in order to meet this requirement. This is very useful if the ports are mounted directly on the motherboard, where excess pressure on the back can break the whole thing. BNC connectors (as seen on 10Base2, and some video connectors) are particularly bad, as they're quite deep and stick out far.

Anyway, the HP 9000/340 loses badly, with a full four BNC connectors sticking way out of the back of the machine, completely unprotected. *sigh*

Posted 2015-04-13.


Some silly custom t-shirts

I'll soon be moving jobs. The company I'm going to has a looser dress code than my current employer, which means I'm going to need a few more t-shirts. Custom-printed t-shirts are now really quite cheap (yay, computers!), so it gave me the opportunity to get myself a couple of custom designs. For some reason, this makes me surprisingly happy.

The first design is the character "Head" from the Spectrum game "Head Over Heels", which you may know I'm something of a fan of. I always need more monochromatic pixel art in my life. I've ended up using this as my avatar on Github and a couple of other places, since a tiny icon of some white bloke is not exactly highly-distinguishable (roll on diversity in IT, please).

The other one is something rather more obscure. You've probably seen those designer-ish t-shirts with "Helvetica" written on them in nice, friendly letters. This t-shirt is just like that, except "Helvetica" is written in Arial, the cheap Helvetica knock-off that's infiltrated everywhere. Happily, it looks a little bit naff and knock-off-y, which is just what I wanted.

The company that printed them for me has given me a money-off voucher, so I'm very tempted to do a few more...

Posted 2015-04-05.


More craft stuff: Basic sewing machine usage

So, sewing. I want to make a few bits and pieces that involve the use of a sewing machine. I still find the mechanism behind sewing machines utterly nutty, but fortunately you don't have to think about the mechanical magic while using them.

The first thing I did was try to knock up a couple of shapes to practise "invisible finishing", which I'm quite frankly rubbish at:

Bearing in mind I've tried to put the best side to the camera, you can see it's not great! I tried to construct a square (not terribly straight edges) and a tetrahedron (a slightly better shape, but the finishing in the corners is naff).

Then, a very simple project: A keyboard bag. Given the silly amount of time I spent working on my DIY keyboard, it seems to make sense to keep it safe when transporting it. So, I constructed a keyboard bag. No invisible finishing, no fancy lining, just a piece of fabric sewn up two sides with a bunch of fiddling at the top for a draw-string:

It does the job, and I'm pretty happy with it. One thing I've learnt with this, as with most craft/making stuff is "don't make mistakes". Catching bugs early in programming is vital to good productivity. When you deal with the real world, it's even worse. Undoing physical mistakes can vary from "difficult" to "impossible", and certainly slows you down. Much easier to spend a bit of time sewing the right line than to tediously unpick a piece of bad sewing. In that respect, computers and "undo" have spoilt me.

Posted 2015-04-01.


Linux 1.0 kernel source reading - 4: The rest of kernel/

This time, I'm polishing off the rest of the "kernel/" directory. It's a short review of a few relatively long files (not long in absolute terms, fortunately).

sys.c: vm86 is messy. I'm starting to wonder whether writing the whole threading thing as a co-routine or continuation-based abstraction could hide the grimness, but I'm not convinced. A lot of this file consists of very boring system calls.

fork.c: Quite fun! How to duplicate all the things you need to create an identical process. Some parts that depend upon fs and mm aren't clear yet, as I haven't read that code, but the intuition is pretty clear. I see this common pattern with C code where I get twitchy about invariants being held that would be managed wonderfully with well-encapsulated objects and RAII.

signal.c: Oh look, yet another structure to hold all the register state! This is used when saving state on the stack during signals for iBCS compatibility, it seems. The manipulation of the user stack, with the use of the "sigreturn" system call at the end of the signal handler is interesting, as is the setting up of a chain of signal handlers in a single go.

exit.c: Almost a misnomer - there's plenty of code that isn't pure exit logic, as it segues into exit-code handling, wait, process groups, and other signal-related stuff. Again, RAII would make safe clean-up a lot easier IMHO.

ptrace.c: Handles process debugging stuff, and is a little odd and encapsulation-breaking - it manually traverses the page table, for example.

Posted 2015-04-01.


Linux 1.0 kernel source reading - 3: Kernel entry points

The bottom-up approach I was previously taking wasn't really helping me get a handle on the kernel's structure. If you squint and turn it upside-down, the kernel looks a bit like a library, admittedly one that wraps the code calling it, and with various other fancy bits. In any case, it seems that a sensible place to start is the points where we enter the kernel - the system calls, traps and interrupts. Rather than try to trace the start-up process further, we can assume the system's running, and see what it gets up to.

kernel/sys_call.S This file provides the low-level handlers/entry points, but doesn't actually install them. The handlers then call the core implementations - XXX calls through to do_XXX. The placement of the user mode segment into FS, used as described previously, is done here. "lcall7" is a mysterious name, but a little tracing shows that it's the iBCS emulation stuff.

ret_from_system_call and friends are here, and provide interesting complexity and functionality. They do a re-entrancy test, run the bottom halves, reschedule if required and post signals. There's also some virtual 8086 unpleasantness.

kernel/traps.c is the next layer up from sys_call.S, handling errors and faults. The comment at the top refers to the non-existent asm.s. Mostly, it sends signals to the process involved, and kills things if something went wrong in kernel mode.

Most of the numbers in the kernel go through appropriately-named constants, but 'trap_no' has mysterious un-named values. A little bit of digging suggests they're related to iBCS emulation.

kernel/sched.c is, as the file comment says, "the main kernel file". It not only handles task switching and timers, but also contains "sys_call_table", so it's basically the entry point for all non-trap, non-interrupt situations. This is where we can see the "sys_*" naming convention for the naming of system call implementations. It also includes a few of the implementations themselves.
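The dispatch itself is tiny - schematically (my sketch in C, ignoring argument passing; the real path runs through the assembler in sys_call.S):

    #include <errno.h>

    /* Table-driven dispatch: the syscall number from %eax just indexes
     * an array of function pointers. */
    typedef int (*syscall_fn)(void);

    int sys_fork(void);
    int sys_read(void);
    int sys_write(void);
    /* ... */

    static syscall_fn sys_call_table[] = {
        /* ..., */ sys_fork, sys_read, sys_write /* , ... */
    };

    #define NR_syscalls (sizeof(sys_call_table) / sizeof(sys_call_table[0]))

    int dispatch(unsigned int nr)
    {
        if (nr >= NR_syscalls)
            return -ENOSYS;
        return sys_call_table[nr]();
    }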

"switch_to" is the core of task-switching. This uses a funky x86 ljmp mechanism and TSS stuff I don't really want to understand. For some reason, I thought the mechanism used to save the registers when switching from user more to kernel mode might also be used when switching tasks, but apparently not. Hey ho!

This file also references a few core header files. "task.h" shows a maximum of 128 tasks. So ickle! "timer.h" deals with the timers managed in sched.c. It shows there's a transition going on between two ways of handling timers, with a nice comment explaining it all. Interestingly, timer_list is defined as a doubly-linked list and used as a singly-linked list!

Bonus feature! Having dealt with files defining the very core of Linux 1.0, I thought I'd read through a few more of the smaller and less challenging files in "kernel/". Not much to report for "info.c", "ioport.c", "itimer.c", "ldt.c" (x86-specific low-level twiddling, ick), "panic.c" and "vsprintf.c".

"module.c" and "ksyms.*" implement a basic module system (although nothing I've seen so far indicates it being used). This is quite fun. "time.c" handles the time, and reminds me how a kernel has to incorporate a bunch of really tedious stuff.

Posted 2015-03-29.


Inside a clothing security tag

Those security tags you get in clothes shops have always been mildly mysterious to me. Somehow, we ended up with a tagged item of clothing at home - looks like the shop assistant failed to remove it. Whatever alarm was supposed to go off as we left the shop either didn't, or was ignored. So, I got a hacksaw out and cut off the pin that goes through the clothing, and it all turned out fine. Then, I thought I would take a peek inside it.

I was a bit worried because I'd heard about those ink tags that stick permanent ink over everything if you tamper with them. No such problem. Indeed, what I found was that the whole thing was nothing like tamper-proof at all. The main thing about the design, I'd say, is "cheap". It's a little strange to me that they're quite so cheap. After all, they're regularly recycled as they should never leave the shop! If they were marginally more expensive, but made theft that much more difficult, it sounds like it would be worthwhile.

Anyway, cutting off the back reveals a central bit holding the pin, plus a coil and capacitor:

The coil and cap are completely separate from everything else, forming an LC resonator for the alarms at the exits:

The rest is basically a shell to hold the pin-holding elements:

Taking apart that structure, you can see how it works:

The spring at the right pushes the small metal cap into the big metal cap, jamming the ball-bearings against the pin, as far as I can see. I'm surprised this provides enough force to make the pin so difficult to remove, but apparently it does! As far as the removal devices go, I think they're just Really Strong Magnets, since most of the chunks of metal involved are magnetic.
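(For the record, an LC tank like that coil and capacitor resonates at f = 1 / (2 * pi * sqrt(L * C)); I believe retail RF tags of this kind are typically tuned to around 8.2MHz, which is the frequency the exit gates sweep for.)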

I guess this is just another example of looking inside a mysterious security device and finding out that the security involved is actually quite straightforward, prosaic and breakable!

Posted 2015-03-24.


Linux 1.0 kernel source reading - 2: libs and include/asm

Having read the initialisation code, I kind of understood how the kernel got itself into the right shape to get going, but not really how things ran from there. 'main.c' makes system calls as the init process, and then I kind of got stuck.

Where now? My first thought (which I must admit hasn't really worked!), was to try to dig into some of the utilities. By this, I mean the 'lib' directory, and the 'include/asm' directory - the former because some utilities/libraries sound a good bottom-up area to start with, and the latter because these headers kept getting included and causing magic to happen, so perhaps it would be good to understand what they're up to.

lib

There are basically two kinds of file here: system call stubs, and implementations of shared utility functionality. The former covers _exit.c, close.c, dup.c, execve.c, setsid.c, wait.c and write.c. These are stubs that call into the system calls, rather than implementing them. Presumably, they were used to implement the initial processes in main.c. However, main.c now contains inline implementations, so I believe they're never used at all!
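A stub like these amounts to just a few lines - something like this reconstruction (mine, not a copy of lib/write.c):

    #define __NR_write 4    /* from the 1.0-era unistd.h */

    /* Syscall number in %eax, arguments in %ebx/%ecx/%edx, "int $0x80"
     * traps into the kernel, and the result (or a negated errno) comes
     * back in %eax. */
    static inline int stub_write(int fd, const char *buf, int len)
    {
        int ret;
        __asm__ volatile ("int $0x80"
                          : "=a" (ret)
                          : "0" (__NR_write), "b" (fd), "c" (buf), "d" (len)
                          : "memory");
        return ret;
    }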

The others are generally paired with headers in include/linux. "ctype.[ch]" implements the C library functionality you'd expect (with a simple table lookup). "string.[ch]" implements standard string functions, most of the functionality coming from inlined grotty assembler in the header file. "errno.[ch]" implements the error codes (including plenty that are never used!). "malloc.[ch]" is the most complex part, but also very well-documented. It implements the kernel (small chunk) memory allocator, with a comprehensive debug mode if you need it. It sits on top of the page-level allocator.

include/asm

Some of the files really are just headers providing macro/inline functionality. Others turn out to be the header associated with a .c file that I also ended up reading...

In the former category, bitops.h provides atomic bit-twiddling, and io.h provides support for poking I/O ports. Memory-mapped I/O is clearly more pleasant! segment.h is really data-copy-to-and-from-user-space.h, which relies on the user data being mapped into the fs segment. How is fs set up like this? I don't know, so I'm obviously reading the code in the wrong order.

system.h contains 'move_to_user_mode', used by main.c. This is kind of interesting, in that it does move to user mode, but it's clearly not a user mode process, as it's still executing code from the kernel side. Presumably the 'exec' call fixes all that. This file also provides important interrupt-twiddling functionality, such as disabling (cli) and enabling (sti) interrupts. I didn't care for the low-level x86 details here, as I want to learn about the kernel, not x86isms.

Even though the kernel is monolithic and single-threaded, the attention that needs to be paid to race conditions is non-trivial. Interrupts can quite happily happen in kernel mode, so you end up either using atomic operations, or disabling interrupts when dealing with shared values.
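The interrupt-disabling idiom, using the macros from asm/system.h, looks roughly like this (a schematic example of mine, not a quote from the source):

    #include <asm/system.h>   /* save_flags, cli, restore_flags */

    static int shared_count;

    void bump_shared_count(void)
    {
        unsigned long flags;
        save_flags(flags);    /* remember the current interrupt state */
        cli();                /* interrupts off */
        shared_count++;       /* now safe to touch the shared value */
        restore_flags(flags); /* back to whatever state we started in */
    }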

The next couple of files are paired with .c files in the 'kernel' directory. 'dma.[ch]' is yet more low-level grubbing. There are fun comments about future plans now long past, and guesses about poorly understood hardware.

'irq.[ch]' are rather more core. irq.h contains macros to build up interrupt handlers. It culminates in BUILD_IRQ, a really ugly macro which builds a fast interrupt (only saves some regs), normal interrupt (all regs and may switch tasks when done) and bad interrupts (ignore). irq.c then uses the macro to define the handlers (and admits their ugliness), and has the code to initialise the interrupts and support the hooking of interrupts (which uses signal handling infrastructure).

By this point, reading the code, it's very clear that Linux 1.0 is an incredibly cheap-n-cheerful Unix clone. (It's not bad code at all, it's just not exactly fully-featured.) If I were a commercial Unix vendor in the mid-'90s, I'd be amused and very much not scared by it. On the other hand, the rate of growth and change that happened after that is something astonishing.

Posted 2015-03-21.


Code reading: Lua 5.1.4

One of my resolutions is to read more of others' code. Especially decent open source code, as there's plenty of it and it provides a rather different view compared to work code. So, Lua seemed a pretty good choice as it's featureful, interesting and 17kloc in total. 5.1.4 is just the version I have lying around and use. Let's get started!

The code is really very compact and terse. It's written as if it's meant to be read on an 80x25 monitor. Much more squished up than the Linux source, and much less commented. I know comments can be misleading, but sometimes just trying to work out what the code is supposed to be doing is a bit painful. I think they're trying a bit too hard on the density.

Reading through the code, I can understand the general shape of what's going on, but I have very little confidence that I'm not missing bugs as I'm reading it. Unsurprisingly it's rather stateful, but it's also not clear what the invariants are, or how they are maintained. While functionality is split quite nicely across the source files, it's not well-encapsulated (C makes this a lot more difficult than C++, with objects).

To understand in depth, I think I'd need to trace the code as it runs, rather than just stare at the code.

The source is divided into the core part, and the libraries. I started with the core, which is broken down by function. Each source file is nice and short, generally a few hundred lines with the longest (the parser) being a touch over 1300 lines. I started with the periphery/base layers - the implementation of strings, tables, allocation etc. All fairly straightforward.

The chunkier parts are the VM, garbage collection, and code generation (the Lua input is converted to bytecode that the VM runs).

The Virtual Machine

The VM is stack-based, but it's not pure stack access. Each call frame has a base address, to which offsets are applied. It's rather like SPARC's sliding register windows. There's a separate stack for values (as just described) and calls, a bit like Haskell's STG.

Lua's "upvalues" (lexically-scoped variables) can be open or closed. If they're open, they're still live on the stack, and are represented by pointers to those values. When the item comes off the stack, they become closed, and become self-contained.

There's some weird GC handling of upvalues I don't quite understand. If an upvalue is unused, it can be resurrected if something else refers to it. I don't know if this is an optimisation, or just a way to prevent multiple upvalues pointing to the same actual value.

OP_CLOSURE's definition doesn't match what really happens. Or rather, instructions to load the values needed for the closure appear after the OP_CLOSURE instruction. This perplexed me both when reading the VM code and the code generation code!

Some of the opcodes seemed rather specialist, to deal with Lua's specific structures. Not unreasonable, I guess.

Garbage Collection

The idea of incremental garbage collection in Lua seemed mildly magic, given the hoops jumped through by languages like Java to do this kind of thing. In the end, the implementation is surprisingly pedestrian: The marking can happen incrementally, and if you put a garbage-collectable value somewhere outside the heap that's already been marked, add it to the "to process" list. Life is made a lot easier by the code being single-threaded, with the garbage collector being able to synchronously kick in at nice, safe points.

The code initially confused me with heap objects being black, white or grey. It took me a while to work out that black means marked (fully processed), white means unmarked, and grey means incompletely processed (we know it's accessible, but haven't chased through it). Obvious in retrospect, I guess. There are two whites, presumably so that we can distinguish "not yet marked in this iteration" from "should have been deleted already", and to avoid nasty mess around the transition.
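The trick that keeps incremental marking sound is the write barrier. A generic sketch (my reconstruction of the idea, not Lua's actual code):

    /* Invariant: a black object never points at a white one.  If a
     * mutation would break that, re-grey the white child so the
     * collector revisits it. */
    enum colour { WHITE, GREY, BLACK };

    struct gc_obj {
        enum colour colour;
        struct gc_obj *gclist;        /* threads the grey list */
        /* ... payload ... */
    };

    static struct gc_obj *grey_list;  /* known live, not yet scanned */

    void write_barrier(struct gc_obj *parent, struct gc_obj *child)
    {
        if (parent->colour == BLACK && child->colour == WHITE) {
            child->colour = GREY;
            child->gclist = grey_list;  /* push onto "to process" list */
            grey_list = child;
        }
    }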

Weak tables and finalisation provide a bit more fun, as usual. Ugh.

There are a few little bits of non-native-English-speaking amusement in the code. For example, the variable to hold the GC "debt" is "gcdept".

Code generation

This is split between the lexer, parser and actual code generation functions. The lexer and parser are hand-written. For this kind of thing, I certainly don't blame them. The parser is relatively long, but better-structured than the code generator, and makes the syntactically-driven code generation a bit clearer. The split of work between the parser and code generator does seem a little uneasy, though. Unclear invariants and responsibilities.

The code generator is mildly funky, mostly because it tries to perform peephole optimisations as it goes along. Jump handling is also a little tricky. It keeps lists of jumps that need their destinations overwriting as a linked list, linked through the jump destination field that will be overwritten. This is made more complex by the code that fills in that linked list, which has a special case not to do so if the target of the jumps is another jump (instead jump straight through to the final destination).
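Spelled out as a sketch (my invented representation, not lcode.c's real structures): each unpatched jump's destination operand stores the index of the previous pending jump, so the pending list needs no storage of its own.

    #define NO_JUMP (-1)

    /* jump_dest[i] is the destination operand of the jump at
     * instruction i.  While unpatched, it holds the index of the
     * previous pending jump; patching walks that chain, overwriting
     * each operand with the real target as it goes. */
    static void patch_jump_list(int *jump_dest, int list, int target)
    {
        while (list != NO_JUMP) {
            int next = jump_dest[list];   /* follow the thread... */
            jump_dest[list] = target;     /* ...then overwrite it */
            list = next;
        }
    }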

Similar tricks abound for handling variables and values, to make sure they end up in the right place at the right time without extraneous register moves. Boolean expressions are handled as expressions with jump pointers for the true and false cases, which I guess is a neater representation if you want to handle conditionals and short-circuiting nicely, but then there's extra conversion if you just want to treat the result as a boolean value.

Libraries

Having read through the core, the libraries seemed to make a nice wind-down coda. Mostly, it's wrappers around useful functions. "loadlib.c" is a bit interesting, as it breaks Lua's usual pretence of ANSI C, and has some quite platform-specific code. The string-matching code is... interesting, too. Clearly they want the code to be compact and simple, but the Lua regexp-alikes use back-tracking. Ugh.

Summary

The code, in itself, is readable, I guess. It never makes life easy, though. None of the above is documented explicitly, so you have to work out everything for yourself. Somewhat frustrating. I guess it's good exercise, though.

Posted 2015-03-11.


Linux 1.0 kernel source reading - 1: Initialisation

I should read more code. Working with other people and reading other people's code are good ways to learn, and I don't do enough of either. Working with other people is something I do at work, so I get some of that, but it doesn't get me looking at the code of world-famous, effective programmers. So, I should read more code.

I've wanted to read the Linux source for some time, but a modern version seems a bad place to start, with several million lines of source. The 1.0 kernel, on the other hand, has 170K lines total. Pretty practical.

It's something of a time capsule. I extracted it from my 1997 Linux Developer's Resource CDROM set (where it was already a few years old), and the feature-set is utterly different from even a few years later, let alone now. x86 only (no 'arch' directory). Single processor only. A floating-point emulator in case your CPU lacks an FPU. The number of drivers is... laughable. The design is clearly an extremely vanilla Unix clone - Tanenbaum's comments on Linux's retro design at the time seem very fair.

Looking at comments in the source, there are lines I'm reading going back 25 years, which is somewhat scary. When I first used Linux, a few years after that, aged 16, OS kernels seemed black magic - something I could compile and play with, but not truly understand. At the time, this was probably true, not just because of my inexperience, but also due to the lack of available documentation. There weren't wikis full of the minutiae of x86 OS construction. Coming back at it now, it's wonderfully convenient, so I can fit it in at the end of a work day when the children are in bed.

Anyway, that's enough of the nostalgic back story. I'm reading the source. For warm-up, the FPU emulator and top-level docs, and then the boot process. My plan is to understand most things reasonably, but I'm not aiming for a fine-toothed comb on everything. I just don't have time.

drivers/FPU-emu I've also been reading a bit of Lua source. In comparison, the FPU emulator is wonderfully well-commented, even if I am skimming it. It makes it clear how much (tedious) work is required to get the details right in emulating an FPU. It also highlights the dearth of documentation available in the early '90s - the author notes they don't have access to a 486 or a copy of the IEEE spec, yet they still have a darned good go at emulating it!

CREDITS There are a decent number of recognisable names here. The most amusing is to see Raymond Chen, noted MS employee and "The Old New Thing" blogger, who contributed the Configure scripts.

Makefile Looking back, this thing is really very simple!

Configure A "little language" in the grand old tradition of Unix, it parses the "config.in" file and asks configuration questions off the back of it, and then generates the config files. Nice. As noted above, written by Raymond Chen.

config.in Wow, a seriously small number of drivers! There's a warning that turning on verbose SCSI error reporting will bloat your kernel by 12K!

build.c Before tackling the actual start-up process, I thought I should look at the tools used to construct the bootable image. "build.c" assembles the kernel from a boot block, a (16-bit) setup chunk, and then the main (32-bit) image. It's remarkably simple, perhaps because the files being glued together are very simple. The a.out format used makes it just a few lines of code to check the object file headers, strip them off, and glue them together into the final image.

zBoot/ The whole compressed image thing is charmingly straightforward, too. Rather than having a kernel with a base address of 0x1000, it is now based at 0x100000, and the initialisation code, er, starts at 0x1000, calls standard decompression code that'll undo gzip, and then calls into it at 0x100000. The tools that do all this are nice and simple.

inflate.c is a hairy but standard (g)zip decompression implementation, so I skipped it.

boot/ Things move all over the place! "bootsect.S" gets loaded at 0x7c00, moves itself to 0x90000, loads "setup.S" after itself, and then puts the compressed system at 0x10000. "setup.S" calls a bunch of BIOS stuff, and moves the compressed system from 0x10000 to 0x1000 (obviously), before calling it in 32-bit mode. Most of the code seems to be about selecting and setting up the video mode.

At this point, we call into the "head.S" of zBoot, which decompresses the kernel, and then calls into the "head.S" of boot.

"head.S" is the final 32-bit start-up code. Turns out the initial start-up code is very similar to the "head.S" on zBoot, which isn't really that surprising. We then stash parameters down in the zero page and identify the processor. Set up paging, reload segment registers, and call "start_kernel"!

init/main.c Called by head.S. The first thing I see is lots of extern declarations in the .c file, and some shared structs being defined again. Very much not "Don't repeat yourself".

It starts everything up, moves into user mode as process 0, forks init (process 1) and enters the idle loop. How does entering user mode just work? It does a return from interrupt to the next instruction, only with the segment registers switched from kernel segments to user segments, which (for this process) are mapped to just be the same as the kernel segments, only user-readable.

"init" starts by calling the "setup" syscall. I need to find out how syscalls get dispatched! Then it tries to exec some version of init, and then there's a little bit of emergency fallback stuff. Done!

Posted 2015-02-19.


Malware-completeness

By rough analogy with Turing-completeness and NP-completeness, I think we have entered a malware-complete age. By this, I mean that it's pretty difficult for an individual to have confidence that their machine does not have malware.

Looking at all the NSA revelations, plus various other stories of state-sponsored hacking, it's clear that there are huge resources available. If a large government wants you hacked, you'll be hacked. They will have known exploits for pretty much all the software you use - look at how long Heartbleed was about, for example, to get an idea of what they probably know about.

Computer viruses really entered the public awareness in the early '90s, and the Morris worm was in '88. Reflections on Trusting Trust was '83, and I'm sure there's plenty more before that, so there's plenty of history to malware, but I think it's only recently that it's become impossible to reasonably harden a general-purpose computer.

The thing is that there's now so huge an attack surface. If you've got gigabytes of RAM and terabytes of disk, it's easy to hide things. You can have user-mode spyware, root kits, things that hide in virtualisation. You can probably do fun things at the microcode level, like a malicious version of the F00F bug fix. There's the BIOS. Everything will have a microcontroller and firmware in, keyboards, mice, SD memory cards, hard disks. All able to be subverted. Your internet router probably has a back door.

I was thinking to myself, 'What is the most useful computer I could feel confident working with, to feel my data would be properly safe, against government-style foes?'. Assuming they can't access my stuff physically (ha ha), and ignoring Tempest, etc., it would probably be a late '90s PC with an open source operating system, disconnected from networking and all external storage I/O disabled. And you know what? I still wouldn't trust it.

Posted 2015-02-19.


ToyTCP, a toy TCP/IP stack (well, not even that, yet)

I am reading through TCP/IP Illustrated (review to follow, obviously!). "Doing" seems to be rather more effective than just "Reading" for learning stuff, so I thought I'd write a little user-space TCP/IP stack.

I have a tendency to have many projects on the go at once and switch between them, so I have now hit such a switching point, having created the absolute minimum program that does something, namely negotiate the LCP level of PPP, badly. I now have code up at github. I'll do a little bit of something else, and come back to it, hopefully bringing it to the level where it can actually receive and drop an IP packet!

Posted 2015-02-10.


GCSE physics lies: Rainbows and total internal reflection

When I said I was done with ray-tracing, I was lying.

I've not done any more coding, but I thought I'd just write down something that I only realised while doing the ray-tracing work.

The story you get at GCSE-level physics about rainbows is that they're caused by total internal reflection causing the spherical raindrops to act like mirrors, and the angle of the arc is thus related to the critical angle, etc. The refractive indices are supposed to be slightly different for the different frequencies, hence the colours.

This simplistic answer is rubbish. If you take the path the light takes once it refracts into the sphere, you can use symmetry to show that the angle between the entry point and its normal, and the exit point and its normal, is the same: the chord and the two radii form an isosceles triangle. The external angles will therefore be the same too, and since the internal angle is just the refraction angle from entry - necessarily less than the critical angle - you will certainly never get TIR under this simplistic model. A physicist friend suggests rainbows are really about Fresnel reflection, but I don't know anything about that.

I noticed this behaviour when writing the ray-tracer, as I'd left in a "TO DO" case for dealing with TIR, and noticed it never got called when doing my refracting spheres demo. I thought a little bit, and had an epiphany. It's taken me nearly 20 years to realise that the GCSE explanation couldn't work!

Posted 2015-01-04.


Electronics for Newbies: An update on Dirac

Another year since my last update on my Z80-based single board computer. I've really not worked on it. The monitor I wrote was unreliable at reading, which made loading programs incredibly frustrating, so I kind of wandered off from it. Since then, I've got a new job, settled down into it, and started on electronics again by building a PCB to display a VGA-like signal, and by scraping the contents of a floppy disk. All psyched up and ready to attack the SBC again!

First up, it's been named Dirac, and has a github repo. The name's not particularly important, it just needed a better handle than "That Z80-based computer". Second, I finally decided to attack the unreliable serial input.

The SIO docs say that for async (RS232-like) input, if you're using the 1x serial clock, it should be synchronised to the input. I wasn't doing this. This seemed like a likely culprit. I needed to use the 16x clock mode on the SIO... but this required me to generate a 16x higher frequency clock from the CTC.

I was using the CTC in "timer mode", which prescales the counts by 16x or 256x (I was using 16x). In short, this prevented me from running the CTC output fast enough for it to be 16x a sensible bit-rate for the SIO. I really wanted to run without prescaling, but timer mode doesn't allow that.

On the other hand, "counter mode" does allow you to run without prescaling, but it counts ticks on a trigger input, not clock cycles. However, if I feed the CPU clock into the (previously unused) trigger inputs, they behave rather like "timer mode" without prescale, which is pretty much what I want. Almost like they designed it like that, ahem. It would have been nice to have it as a suggested design, I guess.

So, I feed the CPU clock to the trigger lines, switch the CTC config from timer mode to counter mode, and switch the SIO from 1x to 16x clock, and everything's good and it reads serial data reliably! I make this sound straightforward, but there was plenty of faff involved.

But... not quite. Somewhere along the way, by switching CTC source, the tick rate has changed by a factor of 2, and the thing runs at 4800bps, rather than 9600. Unfortunately, the timer constant for setting up the CTC is a small odd number, so if I try to adjust it to bring the serial I/O back to 9600bps, it'll be waaaay off. I guess I'll just live with reliable 4800bps comms.
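To spell out the arithmetic: with the SIO in 16x mode, bit rate = CTC input clock / (time constant x 16), so getting from 4800bps back to 9600bps means halving the time constant - and you can't halve an odd number. To illustrate with a made-up value (not my actual constant): if it were 13, the nearest choices of 6 or 7 would leave you roughly 8% adrift of 9600bps, far beyond what async serial tolerates.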

I've been reading up on CP/M, and at this point a minimal CP/M with RAM/ROM-disk is starting to look pretty plausible...

Posted 2015-01-03.


Electronics for Newbies: The software

Argh. Looks like it's been a year since my last update on my z80-based toy computer. Kind of tragic.

A number of things got in my way. I finally had a machine upgrade around last April, and my new machine had no parallel port for the EEPROM programmer. So, I bought a parallel port PCI card. Making the programmer software recognise the card on 64-bit Windows was a complete nightmare. Only after much, much faff did I manage to get it to twiddle the line levels as I expected.

However, it wouldn't program the chip. Probing the lines showed waveforms as I expected. Tracing the schematic through and checking, everything looked ok. I added some temporary pull-ups to see if it helped. It did not. My final guess is that there's just some marginal timing somewhere, and the PCI card and/or driver and/or whatever is just a bit naff and doesn't get it quite right.

In the end, I bought myself a Minipro TL866A USB-based EEPROM programmer. It cost 50 quid, but it's absolutely awesome, and just works, even on 64-bit Windows. Yet another decent recommendation by EEVBlog. (To be honest, the Willem EPROM programmer I'd been using previously had always been a bit of a pain. The fact the power cable is a USB cable with USB type A plugs on both ends is a bit of a hint that it might be cutting a few corners. Super-cheap, and you get what you pay for.)

So, I could now restart development, as I had a way of getting my programs back onto EEPROM. In the past, I'd been assembling the files on my laptop, and then sshing them over for programming. I thought life might be easier if I had the assembler running directly on Windows. My assembler of choice is kio's "zasm". I'm not sure this is a particularly good choice, given it appears to be a bit eccentric, but there you go. The source targets Unix, but I ported it to Windows. This was much harder than it needed to be, as it uses kio's utility libraries, which assume a whole pile of unixisms, despite the fact that the assembler doesn't really use that functionality. Hack, hack, hack and I finally have something that will assemble files.

Then I tried to write a simple monitor, by modifying some existing code. For some reason, it just wasn't playing ball. It would write individual characters to the serial port, but my incredibly simple string-writing routine kept being off by one character, when it wasn't actually just plain flaking out. After much hacking, I did get it to print a string. The "\n\0" in the string constant was rendered literally, as I didn't realise zasm didn't handle C-style escapes, so I replaced it with a "$0a, $00" in the assembly. That worked better, except... my routine was printing a CRLF pair. Hang on... they're both in the ROM image, but not in the source. Further inspection reveals that all "$0a"s (including those in opcodes or operands) become "$0d0a", and this is why my string constant addresses are off, when the whole thing's working at all.

Stupid Windows, stupid zasm. zasm calls "fopen" with "r" and "w", rather than "rb" and "wb". This makes no difference on Unix, but on Windows it still controls line ending translation. And I certainly want no line ending translation on my binary files, thanks. A dumb thing to miss in my porting effort, and incredibly frustrating, as I'd been assuming my bug was somewhere in my assembly source, not the assembler. *sigh*
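The fix is one character per call site - from memory, the moral rather than the literal diff (filenames here are just illustrative):

    #include <stdio.h>

    int main(void)
    {
        /* On Windows, "r"/"w" open in text mode and translate \n <-> \r\n,
         * silently corrupting binaries: every 0x0a written becomes
         * 0x0d 0x0a.  The 'b' suppresses the translation; on Unix it's
         * a no-op. */
        FILE *in  = fopen("monitor.asm", "rb");   /* was "r" */
        FILE *out = fopen("monitor.rom", "wb");   /* was "w" */
        /* ... assemble as before ... */
        if (in)  fclose(in);
        if (out) fclose(out);
        return 0;
    }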

In any case, I can now make a little progress on my toy ROM monitor, and apparently have a working toolchain.

Posted 2015-01-03.


Ray-tracing fun: Transparency and refraction

So, I've been doing all the distributed ray-tracing effects, and this isn't one of them. It is a traditional ray-tracing effect, though. Ray-tracing's best claim to fame is fancy reflection and refraction. Turns out I don't like this one particularly. Refraction is actually a fairly unsubtle effect, and I can see why crystal balls were used for gazing into - it's almost impossible to see what's going on. If you render a sphere, it's basically a big lens, and setting a scene up to render so that you can properly see what's going on with the effect is a nightmare.

Nonetheless, I've had a go. I think it more-or-less works, but basically I don't care if there are bugs in it, as I've got an image out of it and I'm done with it.

(The image shows various different refractive indices - I quite like the one on the right, actually.)

One thing I didn't expect to be thinking about was the attenuation of light. I'm demonstrating transparent, coloured materials. The logical basic model is exponential attenuation. However, this reveals the problems in a basic RGB light model. Let's say I create a material that almost immediately attenuates blue, doesn't touch red, and mildly attenuates green. In this case, a thin piece of material looks yellow, and a thick piece looks red. In other words, the colour qualities change with the thickness of the material. So far, so nice - real materials can behave like this.

However, under this model, asymptotically the material can only become red, green or blue (or cyan, magenta, yellow or white, if the components are precisely balanced). The separation of selectively attenuating frequencies from a spectrum and then projecting into RGB space really does not work the same as applying the function component-wise. I vaguely remember reading something about this kind of stuff in Foley and van Dam, but this is actually the first time I've actually really dealt with it...
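To make the asymptotics concrete, here's the component-wise attenuation I mean, with made-up coefficients that kill blue quickly, green mildly and red not at all:

    #include <math.h>
    #include <stdio.h>

    /* Component-wise exponential attenuation: I = I0 * exp(-k * d).
     * Thin material comes out yellowish; thick material tends to pure
     * red, whatever the starting hue. */
    int main(void)
    {
        const double k[3] = { 0.0, 0.5, 5.0 };   /* red, green, blue */
        for (double d = 0.5; d <= 8.0; d *= 2) {
            printf("d=%4.1f  rgb = %.3f %.3f %.3f\n", d,
                   exp(-k[0] * d), exp(-k[1] * d), exp(-k[2] * d));
        }
        return 0;
    }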

In any case, I think I'm done with all the basic ray-tracing effects now. Hurrah.

Posted 2015-01-01.

