Ugh. I've always hated the word 'blog'. In any case, this is a chronologically ordered selection of my ramblings.
Before I start, I should emphasise that I'm generating VGA voltage levels, to a VGA signal pattern, over a VGA connector, but not using a standard VGA resolution or pixel clock. Fortunately, I've had a monitor capable of multisync since 1995, so I don't see this as being much of a problem.
Following up from my post on VGA signals, I decided to try to make the very dumbest "VGA" generator possible. The idea is to hook up a counter to an EEPROM/Flash memory, and then hook the data pins up to the VGA connector. The sync and video signals would all just be directly recorded in the data.
The basic circuit was straightforward, and whipped up on a breadboard. I used a 2MHz oscillator, as that's what I had around, and it was a nice, low frequency to send signals around a breadboard with. The downside is that each on-screen "pixel" would be rather wide, but I'd live with it. This fed into 3 daisy-chained 74HC590s. I discovered that while chaining the RCO (ripple carry out) into CE (count enable) worked for connecting the first and second stage, it didn't work between the second and third stage (go work out why :). Instead I reconfigured it, as suggested in the data sheet, to feed the RCO of the second stage into the CPC (counter clock input) of the next stage. I now had a 24-bit counter to address whatever memory I put in. It appears ripple delay wasn't a problem.
For memory, I selected a 29040 512KB Flash memory. I'm sure there's a tiny processor in there, so my VGA display is not technically CPU-free, but, well, we could pretend it's just a ROM. I'd been using EEPROMs in the past, but I'd noticed that Flash memories are bigger and cheaper, so, well, that's what I used.
I cobbled up a video mode whose total pixels-per-frame fits neatly into the memory (actually, fitting neatly into 32KB, but I repeated it throughout the memory's address space - I guess I could have created an animation), and wrote a little Lua script to generate a memory image that produces the signal.
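The original script was in Lua; here's the shape of the idea as a Python sketch. All the timing numbers and data-bit assignments below are made-up placeholders, not the real mode from my script - the only constraint carried over is that one frame fits exactly into 32KB, repeated through the 512KB part.

```python
# Hypothetical mode: every number here is illustrative, not the real one.
H_VISIBLE, H_FRONT, H_SYNC, H_BACK = 100, 6, 16, 6   # pixels per line
V_VISIBLE, V_FRONT, V_SYNC, V_BACK = 240, 1, 4, 11   # lines per frame

H_TOTAL = H_VISIBLE + H_FRONT + H_SYNC + H_BACK      # 128 clocks per line
V_TOTAL = V_VISIBLE + V_FRONT + V_SYNC + V_BACK      # 256 lines per frame
assert H_TOTAL * V_TOTAL == 32 * 1024                # one frame fills 32KB

HSYNC_BIT, VSYNC_BIT = 0x01, 0x02                    # assumed bit assignments

def frame_image():
    image = bytearray()
    for y in range(V_TOTAL):
        for x in range(H_TOTAL):
            byte = HSYNC_BIT | VSYNC_BIT             # both syncs idle (high)
            if H_VISIBLE + H_FRONT <= x < H_VISIBLE + H_FRONT + H_SYNC:
                byte &= ~HSYNC_BIT                   # hsync pulse (active low)
            if V_VISIBLE + V_FRONT <= y < V_VISIBLE + V_FRONT + V_SYNC:
                byte &= ~VSYNC_BIT                   # vsync pulse (active low)
            if x < H_VISIBLE and y < V_VISIBLE:
                byte |= ((x ^ y) & 0x3F) << 2        # 6 colour bits: test pattern
            image.append(byte)
    return image

# Repeat the 32KB frame through the whole 512KB address space:
rom = bytes(frame_image()) * 16
```

The counter just walks through `rom` byte by byte, so the sync pulses and colour data come out at the right clocks with no logic at all.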
The 8 data bits of the memory then drive the lines of the VGA connector. H- and V-sync are driven directly by the TTL-compatible outputs of the memory. The remaining 6 pins give us 2 bits each of red, green and blue. 64 colours is way more than I ever had on my ZX Spectrum. To get these colours, I put a resistor in series with each of the pins producing colour; together with the 75 Ohms at the monitor end of the cable, this forms a voltage divider, so that when both pins for a colour are on, the TTL signals become a 0.7V VGA-like signal.
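As a sanity check on the divider arithmetic: assuming an idealised 5V output high and binary-weighted series resistors (my assumed values - the actual resistors used aren't given here), the full-scale level lands near the 0.7V VGA maximum:

```python
VH = 5.0          # assumed idealised TTL output-high voltage
R_TERM = 75.0     # termination at the monitor end of the cable

R_MSB = 680.0     # assumed series resistor on the high bit
R_LSB = 2 * R_MSB # binary weighting: the low bit contributes half as much

# With both bits high, the two resistors sit in parallel as the top leg
# of the divider formed with the 75 Ohm termination.
r_par = (R_MSB * R_LSB) / (R_MSB + R_LSB)
v_full = VH * R_TERM / (R_TERM + r_par)
print(f"full-scale level: {v_full:.2f} V")  # ~0.71V, close enough to 0.7V
```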
And... the end result works! Here it is, with just 3 bits of colour (I was a little impatient!):
I wired up the other 3 bits, and, well, got the colours:
You can see the individual horizontal pixels which, as predicted, were rather wide (16 times as wide as if we'd run off a VGA-like pixel clock). However - there's a big vertical stripe at the end of each pixel. What's that about? When I dig the oscilloscope out, I see big piles of transients between the pixels where the memory output is changing. If I put an octal latch on the output of the memory, clocked on the oscillator to make the signal cleaner, the stripe goes away:
I'm a Bad Person as the latch I have triggers on the same edge as the address changes on, so I'm probably breaking set-up and hold times, but the chip doesn't mind - if I pop an inverter on the clock signal, it makes no difference to the video image.
I have quite a few plans from here. I'd like to try to up the frequency of the oscillator, to get higher resolution, and I'd like to use the high bits of the oscillator to trigger the high bits of the memory, to generate animation. Oh, and I plan to get the code and schematic documented, and get the thing up on github. Fun, fun, fun!
I am perhaps the last person to composite mod their Spectrum. I dug it out to demonstrate 80s computing to the children (much fun was had by all), and while our TV does take [UV]HF in, it's probably going to be the last one that does. So, I gritted my teeth, committed sacrilege on the machine I grew up with, and did the mod.
Of course, I did the reversible version, unsoldering leads rather than cutting them. Even then, I was extremely twitchy about lifting a pad or whatever. Interestingly, though, no hardware change would have been necessary. As an experiment, I just tapped into the composite signal being fed into the modulator (with an alligator clip!), and... it works fine on the display. The modulator doesn't do anything sufficiently bad to make my display fail. The business of disconnecting the existing modulator and routing the signal out through the TV out socket seems to be largely for convenience. And for me... convenience won in the end.
Of course, resurrecting the Spectrum didn't involve just twiddling the video output. After a couple of decades (at least) since last use, I had to replace the grotty old keyboard membrane. It turns out my muscle memory for the Spectrum keywords hasn't gone away, which is mildly scary. I also bought a SMART card to load memory images into the Spectrum, which is both cheap and effective. It's all pretty cool, although admittedly not as convenient as an emulator.
Having knitted a couple of scarves with plain patterns (garter stitch and stockinette), I thought I'd try out some other patterns, and see what there is to that. I've now knitted up a few swatches for the simplest patterns, as I work my way up. I'm trying to match the lengths on them, so that by the time I'm done I can stitch them together and create... yet another scarf!
In any case, here are the swatches I've created so far:
These are, clockwise, starting from the top left, 2x4 ribbing, basket weave, stockinette, garter stitch and seed stitch. So, there you go. Now I'm trying to work on some slightly more fancy patterns....
In 1995, I had a relatively brutal introduction to VGA signal timings. I wanted to set up XFree86, for which you had to specify the exact signal timings, and the documents were impressively confusing to a teenager, yet also full of dire warnings that GETTING THIS WRONG COULD DESTROY YOUR MONITOR. Somehow I got through it all and managed to get a working X11 set-up, largely by cargo-culting the examples. Kids these days (and myself) have it easy, etc.
Fast forward a decade, and I saw Eben Upton bit-banging a VGA signal using an AVR. This was his idea of Raspberry Pi at the time - a small, simple and understandable computer to teach kids to program. Like an early micro, the AVR could bit bang the screen as it was being rasterised, and do its computation in the vertical fly-back, and you could run an interpreted language on top of that. Of course, the final RPi was something incredibly different, and stunningly more powerful, but it was awesome to see it evolve from that.
Roll forward another decade (argh!). I've been interested in generating my own VGA signal for years - not in software, but as another hardware project, building my own graphics card. It's way down my projects list, but this evening I thought I'd make a little step on the road there - I wanted to both refresh my memory and make sure I fully understood how a VGA signal works. For the moment, I'm ignoring the analogue details - I want to get a handle on the timings.
There are a number of websites describing VGA signals, but I never really trust those timing diagrams, especially with all the elided data. So, I whipped out my oscilloscope, wired up a VGA connector, plugged it into my monitor cable, and got the actual waveform.
My plan is to describe the waveform algebraically. There are three sets of lines of interest - the RGB colour lines, horizontal sync and vertical sync. I'll create a few states based on what's being placed on those lines at a particular point in time:
All of the above represent the lines being held for one pixel clock time unit. From this, we can construct some signals. I'll use '* n' to represent a signal pattern being repeated n times, and '+' to mean signal concatenation.
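This algebra maps directly onto Python lists, where '+' concatenates and '*' repeats. In the sketch below the state encoding and the porch breakdowns are my placeholders (they aren't the definitions above); only the totals of 1056 clocks per line and 624 lines per frame come from the measurements later in this post.

```python
# One "state" = the (video, hsync, vsync) levels for one pixel clock.
def state(video, hsync, vsync):
    return [(video, hsync, vsync)]

# Assumed horizontal breakdown summing to 1056 clocks (VESA-style numbers).
H_VISIBLE, H_FRONT, H_SYNC, H_BACK = 800, 16, 80, 160

def line(vsync, video=1):
    return (state(video, 0, vsync) * H_VISIBLE    # active video
          + state(0, 0, vsync) * H_FRONT          # front porch
          + state(0, 1, vsync) * H_SYNC           # hsync pulse
          + state(0, 0, vsync) * H_BACK)          # back porch

# Assumed vertical breakdown summing to 624 lines.
V_VISIBLE, V_FRONT, V_SYNC, V_BACK = 600, 1, 3, 20

frame = (line(vsync=0) * V_VISIBLE
       + line(vsync=0, video=0) * V_FRONT
       + line(vsync=1, video=0) * V_SYNC     # hsyncs keep going during vsync
       + line(vsync=0, video=0) * V_BACK)

assert len(frame) == 1056 * 624
```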
One thing to note, which seems to get left out of some timing diagrams, is that the H syncs keep going through the vertical blanking period - it's just that there's no image data during that period. The vertical sync signal edges are sync'd to the rising edge of the horizontal sync signal.
The variables are as follows, including example values for 800*600 at 75Hz:
From this we can see there are 1056 pixel clocks per line, and 624 lines per frame, and at around 75 frames per second, this is 1056 * 624 * 75 = 49420800 pixels per second. I believe a 49.5 MHz clock is used.
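The arithmetic, spelled out as a one-liner check:

```python
clocks_per_line, lines_per_frame, frames_per_second = 1056, 624, 75
pixels_per_second = clocks_per_line * lines_per_frame * frames_per_second
print(pixels_per_second)  # 49420800 - just under the nominal 49.5 MHz clock
```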
I've finished reading Spacetime Physics, and have just a few more notes from the last section of the book...
Momenergy is an awesome concept! The combined momentum-energy 4-vector is very neat. The way that Newtonian energy is just a Taylor expansion term of the full relativistic time component of momenergy is surprising and wonderful. Mass (described elsewhere as rest mass) as an invariant quantity, like subjective/proper time is super-neat. Photons lack mass but still have energy and momentum, as the Lorentzian metric length is zero but the components are not - we have an answer to the question "what happens to the momentum of a particle, when you take the mass to zero and the speed to c?".
Basic particle physics interactions then become... almost obvious. The total sum of the 4-vectors in the system is preserved, and it all flows from there.
One of the nuttiest things is that mass, being the length of a 4-vector, doesn't sum (using the Taylor and Wheeler approach of mass as 'rest mass', not 'relativistic mass'). Unsurprisingly, Taylor and Wheeler avoid trying to build concepts like forces and centre of mass on top of this.
A thing that confused me for a while is that a change of reference frame produces Doppler shift on photons, changing their energy. But... their mass and speed stay the same across all reference frames. What's going on? So, the mass stays the same (0), and speed stays the same (c). With a change of reference frame, both energy and momentum change by the same amount, so that mass remains the same, and it's all fine.
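A quick numerical check of both points (my own example, in units with c = 1): boosting a photon's energy-momentum 4-vector changes E and p together, leaving the mass at zero; and a system of two back-to-back photons, each massless, has a non-zero total mass.

```python
import math

def boost(E, p, zeta):
    """Lorentz boost along x by rapidity zeta."""
    return (E * math.cosh(zeta) - p * math.sinh(zeta),
            p * math.cosh(zeta) - E * math.sinh(zeta))

# One photon: E = |p|, so mass^2 = E^2 - p^2 = 0.
E, p = 2.0, 2.0
E2, p2 = boost(E, p, 0.5)           # Doppler shift: E and p both scale by e^-0.5
mass_sq = E2**2 - p2**2             # still zero: the photon stays massless

# Two back-to-back photons: each massless, but the system's total
# 4-vector has mass 2E - mass doesn't sum.
Ea, pa = 1.0, 1.0
Eb, pb = 1.0, -1.0
sys_mass = math.sqrt((Ea + Eb)**2 - (pa + pb)**2)   # 2.0
```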
So, I've done special relativity. The last chapter of the book concerns general relativity, and it really leaves me with more questions than answers. Why gravity in particular is a special force, and exactly how curving spacetime is fundamentally different from a field-based approach are not clear to me. On to the general relativity texts, I guess.
Warning: I'm so depressingly out of practice at mathematical things that I could well have stupid mistakes in the following. Sorry! If you spot anything wrong, please mail and correct me...
Some more notes on special relativity as I traverse Taylor and Wheeler's Spacetime physics.
Subjective travel times
So, I like hyperbolic rotation stuff, because it makes changes of velocity behave in a nice additive fashion. We have the relativistic velocity as tanh(zeta), where zeta is the rapidity. I like rapidity because it's additive and at low speeds is the same as velocity. From the point of view of a person accelerating to a relativistic speed, they can treat the acceleration as a large number of small changes of velocity, and the rapidity is the velocity they would expect to have in a Newtonian universe.
In a Newtonian universe, travelling distance d would take time d / zeta. Subjectively, in a relativistic universe, things would move past you at velocity v = tanh(zeta). On the other hand, Lorentz contraction would make the distance d' = d / cosh(zeta). Subjectively it would take time d' / v = d / (cosh(zeta) * tanh(zeta)) = d / sinh(zeta) < d / zeta.
In other words, if you're going somewhere far away, subjectively it takes less time than in a Newtonian universe!
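A numeric example (my numbers, not the book's): cover d = 10 light-years at rapidity zeta = 2, i.e. a Newtonian-equivalent velocity of 2c.

```python
import math

d, zeta = 10.0, 2.0

newtonian_time = d / zeta             # 5 years at a Newtonian "2c"
v = math.tanh(zeta)                   # actual speed, ~0.964c
contracted = d / math.cosh(zeta)      # Lorentz-contracted distance
subjective = contracted / v           # = d / sinh(zeta), ~2.76 years

assert math.isclose(subjective, d / math.sinh(zeta))
assert subjective < newtonian_time    # less time than Newton would predict
```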
Interpretation of interval
The interval, t^2 - x^2, is a nice invariant, but what's its physical interpretation? The "straight line path" is the route that maximises the interval, and (for time-like paths) you can find a frame in which the x movement is zero, in which case the interval is just the square of the most time you can experience going between the points in space-time. Which is also known as proper time.
About this point, I realise this is all covered about 4 pages ahead of where I got to in Spacetime Physics.
Change of velocity as shear
In Newtonian mechanics, a change of space or time reference point is a translation, and a change of velocity is a shear. In special relativity in its normal representation, a change of reference point is a translation, but change of velocity is nothing like a shear. Can we find a way of reformulating things so that it is a shear, at the cost of making a change of reference point into something quite different?
Yes we can! First of all, we make the y-axis into sqrt(t^2 - x^2). As each line of constant y now represents a particular interval, the y-axis value is unmodified by a change of velocity, as required for a shear. By using a square root, the y-axis is still a time axis for x = 0.
Then we want the x-axis to behave like a shear during change of velocity, with it being affected by a translation, proportional to the distance up the y-axis. A thing that behaves additively under change of velocity is rapidity, and moreover each point along the x-axis for a fixed value of the y-axis (interval) represents a specific rapidity required to reach that point in spacetime. So, if we change the x-axis to sinh^-1(x), we get the behaviour we want!
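The two properties this construction leans on can be checked numerically (my own check, not from the book): a boost of rapidity zeta leaves the interval sqrt(t^2 - x^2) unchanged, and shifts the rapidity of any event additively by zeta.

```python
import math

def boost(t, x, zeta):
    """Lorentz boost along x by rapidity zeta."""
    return (t * math.cosh(zeta) - x * math.sinh(zeta),
            x * math.cosh(zeta) - t * math.sinh(zeta))

tau, eta = 3.0, 0.7                      # interval and rapidity of an event
t, x = tau * math.cosh(eta), tau * math.sinh(eta)

t2, x2 = boost(t, x, 0.2)
tau2 = math.sqrt(t2**2 - x2**2)
eta2 = math.atanh(x2 / t2)

assert math.isclose(tau2, tau)           # y-axis value (interval) preserved
assert math.isclose(eta2, eta - 0.2)     # rapidity shifted additively
```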
Admittedly the resulting representation of spacetime doesn't behave nicely under translation or have other properties we'd like, but still!
Uniform acceleration doesn't lead to uniform time
One of the things I found fairly unexpected is that, given two identically accelerated objects, positioned at different points in the direction of acceleration, they will age differently. This carries through to general relativity, so that objects in a uniform gravitational field age differently. In other words, you can tell that you're undergoing uniform acceleration, and it's not equivalent to genuine free-float, which I find utterly unexpected.
Time in the rest frame as action points
As I have little intuition about the Lorentz metric, I've been playing around in order to get a feel for it. This motivated the "relativistic change of velocity as shear" thing above. It's very tempting to try to reframe the Lorentzian metric as a normal Euclidean one. So, one way to look at it is to track the motion of a set of particles in a particular reference frame. Each particle, in this reference frame, can move some distance x and accumulate some proper time sqrt(t^2 - x^2), for a given amount of time in the reference frame, t.
In other words, a particle, measured against a reference frame, can "choose" to spend that reference frame time on moving or experiencing the passage of time. In the parlance of turn-based strategy games, time in the reference frame is action points that can be spent on movement (in the reference frame) or action (subjective experience of the passage of time), albeit with a Euclidean l2 norm, rather than the l1 norm of those games (sum up movements plus actions).
Put another way, reference frame time is path length of a path representing travel through a Euclidean metric spacetime of reference frame space and subjective time. This does not look particularly helpful, since you are unlikely to want to match subjective times together, but I thought it a nicely different way to look at things.
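In symbols: since tau^2 = t^2 - x^2 rearranges to t^2 = x^2 + tau^2, reference-frame time really is the Euclidean norm of (distance moved, proper time experienced). A tiny check with made-up numbers:

```python
import math

t, x = 5.0, 3.0                      # reference-frame time and distance moved
tau = math.sqrt(t**2 - x**2)         # proper time: 4.0, a 3-4-5 triangle
assert math.isclose(math.hypot(x, tau), t)   # the "action points" budget balances
```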
Warning: I'm so depressingly out of practice at mathematical things that I could well have stupid mistakes in the following. Sorry! If you spot anything wrong, please mail and correct me...
So, I'm slowly working through Taylor and Wheeler's Spacetime Physics, which is a reasonable pile of fun. I reached the halfway point during my gardening leave, and have basically been on hold while I've been learning the new job. However, I thought it worth starting to write up some notes on the things I've been thinking about while learning special relativity.
I've got some more complicated bits to cover (even before I get onto the second half of the book), but that'll do for now.
It's been 3 years since my last knitting project, and I recently bought a book on knitting, so it seemed time to create a new project. Plus we had this weird multi-coloured wool lying about the place.
Last time, I knitted and only knitted. I never learnt to purl. This meant I ended up producing garter stitch, which is frankly not as pretty as the stockinette everyone's used to. This time, I learnt to purl!
(The book recommends using plain wool when learning, but this multi-coloured stuff's pretty good, since the colour changes make it all a bit clearer as to what's going on.)
Actually, I tried to do a bunch of fancy stitches, and tended to end up with accidental yarn-overs when switching between knit and purl within a row, so I just fell back on doing a bunch of stockinette. Caroline then suggested making mini circular scarves for the kids (or "snoods", although I can't face the term), so that's what I did.
Once I got the hang of stockinette I put in a bit of switching over the stitches in order to create initials on the scarves to distinguish them. The first one I just tried creating a region of inverted stockinette, but that didn't work so well - the way it tends to curl in opposite directions at the sides and ends meant that some bits bumped up, and some down. For the second one, I tried doing the initialed area in garter stitch. It more or less worked. Hey ho.
End of step one, it looked like this:
You can see they look a little, err, uneven. Rather like my first scarf. Apparently the secret is "blocking", which is a fancy term for pinning it out and getting it wet and making it dry in the right shape. Doing this properly involves threading stuff around the edge to get a nice straight edge, but I did a lazy version:
I think they came out ok. You can see a certain amount of twisting up around the edge, but that's just what stockinette does, apparently. If I don't want that I have to put some fancy edging on. I'll live.
Finally, I stitched up the edge to create the loop, and I have a couple of home-made if slightly naff Christmas presents! If Daddy can knit, perhaps this will convince the children that women can be astronauts and doctors....
I was in the mood for another Magnetic Scrolls adventure, and had tried to get into this a few times before without much success, so I had another go at Corruption. I did make a fair amount of progress, before trying to work out if the hospital section was a red herring or not, and finishing off the game with a walkthrough.
As you might guess from that, the game did not utterly enthrall me, even if it was reasonably fun! It's actually a rather small game, compared to Guild of Thieves or Jinxter. The trickiness is in how the game is heavily time-oriented, with certain events happening at certain times. You need to be at the right place at the right time to collect all the evidence you need to show your innocence.
As such, it's actually much more of a scavenger hunt text adventure than it initially looks. On the other hand, the way to find out what you need to do, when, seems to largely consist of following people around, or hanging around seeing what happens at particular times, in order to construct the correct walk-through. In other words, to complete the game, you must make notes from other runs, and then make the character act like a psychic. I'm not a great fan of this approach.
Finding the "treasure" was in a few cases counter-intuitive, as you must not only work something out, but make the evidence obvious to your player character. In other cases, it was simply a bit obscure.
There's a fair amount of filler in what is even then a fairly small game. Irrelevant locations abound, as do red herring NPCs. Compared to GoT and Jinxter, it's a real disappointment. It's very much not a game I'd want to try to complete without a walkthrough, but it's reasonably fun for a quick explore with hints.
So, some time ago, I did my Head Over Heels cross-stitch, but I never found a nice way of presenting it and protecting it from the elements. No standard picture frame would fit. And now I have a frame!
Work has a "Makers Lab", and my original plan was to use the little CNC milling machine to make the appropriate shape (a hole on the front and a recess on the back). Fortunately, an experienced guy turned up, and advised me that it would be slow, with a relatively steep learning curve (I should start off by playing about with foam, to get experience). Instead, I was pointed back to the laser cutter.
I was already going to use the laser cutter to cut the window, from some spare 3mm transparent acrylic. So, I broke the rest of the design into a set of 3mm layers, and laser cut some 3mm ply for the frame shape plus a stand. I stuck it all together with copydex (probably the wrong adhesive!), and this is the result.
I really am pretty happy with it. I considered painting it, but Caroline rather liked the raw laser-cut look. This now sits happily on my desk at work, where no-one has expressed the slightest interest. There we go.
I've always wanted a set of Penrose tiles, and now I've made some! I wrote a small Haskell program to generate a regular tessellation of kites and darts (not a Penrose tiling) as an SVG, and then used the laser-cutter at work to cut the shapes out of 3mm perspex. I actually cut out a full blue set (kites and darts) and a full yellow set (kites and darts), and mixed them, so I now have two sets! One for home, and one for work.
I learnt a couple of things: 1. The ratio of kite to dart pieces in use tends to the golden ratio. I've cut out the pieces in a 1:1 ratio, and end up with some spares. 2. Putting the pieces into a Penrose tiling is not quite as easy as you'd think. 3. But it's kinda addictive! I'm most tempted to cut out a pile more in order to be able to make bigger patterns.
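Point 1 can be checked from the standard P2 deflation rules (a standard tiling fact, not something from my program: each kite becomes 2 kites and a dart, each dart a kite and a dart). The counts run along the Fibonacci numbers, so the ratio tends to the golden ratio:

```python
kites, darts = 1, 0
for _ in range(20):
    # One deflation step: kite -> 2 kites + 1 dart, dart -> 1 kite + 1 dart.
    kites, darts = 2 * kites + darts, kites + darts

phi = (1 + 5 ** 0.5) / 2
ratio = kites / darts
print(f"{ratio:.6f} vs phi = {phi:.6f}")
```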
I've finally finished the model aeroplane that I posted about previously. Actually, I finished it a while ago, but I've only now got the energy back to post!
It turns out that the later and later stages of model aeroplane construction are less and less rewarding, at least for me! Gluing wood into shape is fun, covering with paper is fiddly, and decoration is tedious and when I do it, it produces underwhelming results. Still, it is done.
It's been undercoated, painted, details drawn on, decals applied, varnished and final features added. Final features first: I tacked on the antenna as a somewhat ad hoc extra, and the cockpit was made from painted balsa strips, since the original plastic cockpit somehow was crushed. Varnishing was the same as painting.
Decals were just like the "soak in water and slide onto surfaces" transfers you'd get in cereal packs in the '80s. They still have a tendency to tear, and they don't look great, but they're ok. I drew the edges of control surfaces on with a pen and ruler, which just reveals how poor I am with a pen and ruler. And then the lines ran a bit when water from the decals got on them. I guess the lesson is to do decals before lines.
The main faff was the painting. Rather sillily, I did it with an airbrush. I wanted the best finish I could manage with my limited skills, and, hey, I got to play with a new tool and attempt to learn a skill and understand the technology. Airbrushing is pretty fun, but I'm... not skilled with it. I got a cheapy airbrush, which wouldn't help, but it's easy to get it to clog up, and condensation can make it end up somewhat sputtery.
Still, it's done now!
I've finally finished the DIY laptop case project that I'd been working on, and moreover had the time to recover from it.
The plan was to create a fabric case for my ancient Macbook Air, with the external cover using pinstripe fabric at an angle, and the inside using a patchwork of Clarissa Hulse fabric.
Overall, I'm pretty happy about the result, although I'll admit the quality of work's not great:
After comparing a few alternatives, the design I used came from here. I decided on a plastic zip so as not to damage the notebook, and learnt far too much about the varieties of zip design before promptly forgetting it all. I decided to go for a nice chunky zip in bright orange as a bit of a contrast to quiet pin-stripe.
Anyway, I was never going to produce a result as good as someone with proper sewing experience, and as I'm making a one-off the lessons I learnt are somewhat lost, but I really wanted to create my design myself, and, well, it was an interesting learning experience. :)
Here are some things I learnt:
Despite all that, it was a fun project.
They try hard to make Go not hard. They leave out hard things, and say to do things in a not hard way. They have strong views and are very clear as to how to write good things in Go.
It has strict types as weak types are bad, but poor strict types as good strict types are too hard. So, I can't make my own safe map that holds a type A or a type B. But Go has maps built in. Could this be why? Poor types make me sad.
When I write Go, it feels like all short words of one sound. And they say this is like how to speak to a small child. And I shake my head, I give up.
Writing Go feels like writing using only monosyllabic words because somebody heard that you should make things easy to explain to a child, and short words are easy. Of course, you don't speak to a child with monosyllables, you speak simply and clearly.
I read a paper once that tried to explain the problems with Kelly criterion, using words of one syllable. It was doing that to be condescending, and unsurprisingly it just made the arguments more obscure. Go feels like that to me.
Fundamentally, Go is retro-futuristic. If you want a language in the footsteps of C and Awk, if you were clobbered with C++, Java and Python and wanted the future Bell Labs promised you, this is the language for you.
However, the changes don't really progress beyond the '80s. The way of representing types is perhaps an improvement on C, but is just stupidly weak beer compared to, say, what Haskell gets up to. Knowing what's immutable through constness or values just assigned once is great. Go does not have that. It has pointers. Pointers with "nil" values allowed. As mentioned above, no sane generics.
The interface model is insane but fun. It's not what I'd put in a strictly-typed language. Pointers allow out and in-out parameters, but you can also return tuples... but there are no tuple types.
Mind you, tuple types might encourage you to factor out common functionality, such as error-handling. All error-handling should be as long-winded and tedious as possible, as otherwise you're not doing it right. (In a similar vein, you should be writing full explicit error messages for when your unit tests fail. Having a compact way of expressing your invariants is not the proper grind.) Exceptions are bad, because people don't use them properly. Of course, in C people always checked returns codes, so I can see why they have returned to this approach.
Except, of course, that they do have exceptions, wrapped up under a different name and twisted up so that you're not tempted to use them. I do rather like "defer", though, which is almost as good as actually having RAII or some other "with" structure.
The channel-based communications mechanisms and go-routines are nice. Conceptually not new to users of Occam or Erlang.
Everything I see in Go, I see condescending design choices made by someone with confirmation bias. They've cleared away all that is bad in modern languages, returning to and enhancing those things that are good. And ignoring all the things that are clever and different.
In some ways, the name says so much. A clear lesson from everything else is "Choose a name that is easily googled". However, this knowledge comes from after 1990, and the thing's called "Go".
What would I use instead? I'm a bit of a Haskell nut, despite its deficiencies. I expect ocaml is pretty good, but need to look into it. I'm getting increasingly interested in Rust.
To be perfectly honest, it does well as a deliberately mediocre language. It's probably quite good as a first language - better than Python at any rate. In the end, what grinds me down about it can be described in two words: parochial condescension.
Lego is not quite Trigger's Broom, or rather the name is, but the toy itself is not. Lego Mindstorms shares the name with the original toy, but is basically incompatible, having been incrementally redesigned over the years.
So, I've been out of Lego for well, most of twenty years - a generation - and am getting back into it with my children. I received a fantastic Lego Mindstorms set as my leaving present from my last job, and decided to build it with them, which meant waiting for the Summer holidays. The holidays have arrived, and we're now playing with it.
It took a while to realise it, but, well, look at the pieces:
The top piece is traditional old-school Lego. The bumps on top allow you to stack pieces. Good in compression, poor in tension. The second row is Lego Technic from my childhood - the holes can hold axles, or rivet-like pieces to bind the bricks together, like a spanner-less Meccano. The third row is Lego Mindstorms - it's got the holes, but it's got no bumps. You can't directly use old-school Lego with Mindstorms!
I can kind of understand why they did this - the traditional Lego connections, being poor in tension, are useless for the kinds of applications Mindstorms are for. On the other hand, it seems really weird to have a Lego-branded product that, well, doesn't plug with Lego.
A friend said I should paint the 3D-printed Weighted Companion Cube, so I did. I had the paints left over from a model kit from my childhood (see previous post), but I'd never really used them before, especially on something small. For some inexplicable reason I hadn't spent my early adolescence painting tiny model orcs, so the painting was something of a lesson for me. I learnt:
Without further ado, the fruits of this labour are below. To be honest, I reckon it looks a bit better in real life. :)
Recently, I've been constructing things. A few months ago, between jobs, I popped down to where I grew up to collect the last of my stuff. It's been some time! However, I didn't really want to pull things out until we'd bought a place, to avoid moving everything time and again. Then, when we'd bought a house, time for this kind of thing was incredibly limited. Anyway, the stuff's finally here.
Amongst all this stuff was a balsa-wood model aeroplane that I was constructing with my father, a project that must have stalled in the early '90s, if not before. The fuselage had been glued up, but that was it. Being something of a completionist, I worked through the rest, and it's now ready to paint!
It's not a brilliant job, but it makes me happy. Things I discovered include:
"A 3D printer?" you say? Why, conveniently work has a 3D printer for our use, which looks like an extremely fun toy. It has the advantage that you can set it off, go do some proper work, and then come back when the print is done.
My first test was a Weighted Companion Cube, because I'm a fan of Portal. My first attempt went wrong when I failed to click the "add supports" button, leading to the printer attempting to doodle into space. One quick cancel later, and a reslice, it was off. With a well set-up high-end consumer printer, printing a small part, it seems that the whole thing's near idiot-proof.
This is the result. It's about an inch cubed:
To my eyes, the quality is very good. You can see the printing artefacts, but they're relatively small. I am extremely tempted to try the acetone vapour smoothing trick, and see how it goes.
Finally, the solution! This stage involves some moves that are relatively tricky to explain, and if I were living on the cutting edge of ten years ago, I'd just attach a video of the moves. As it is, I won't, and instead I'll find someone else's description.
As it turns out, after exploring the complete space of 2x6 configurations, the only move required to get to a state from which we can solve the Master Edition is a "-X". This takes us from the initial back view:
To a configuration where you can see the core of the solved puzzles:
From here, you can perform a twist very similar to that used when solving the small magic, to move to an "L" shape:
Finally, you can do two of the "bend the corner round" moves that you use to finish the small magic, to bring it into its final configuration:
I found the website in the links after writing everything else up, when I was looking for an easier explanation of the more complicated moves. Interestingly enough, the moves described there are basically the same set, plus a "row swap transform", which it describes as also implementable with X and O. So, it appears my analysis is either "unoriginal" or "correct", depending on how you look at it.
In this post, I'm going to look at moving between the possible 6x2 flat configurations. For everything here, I'm assuming that you're viewing the Magic joined-up-rings-(starting position)-side-up. As the pattern's a bit distracting, I'll be using a schematic view of the Magic:
The numbers represent the initial positions of the squares, and the dots represent a particular edge. Initially, I'll place the dots on the outside, and have the top-left square be square number 1. Note that the 3 highlighted squares in the top left are more than enough to define the position of everything else (see my previous post for details).
We're going to see what positions are reachable given combinations of three basic moves. The first, I'll call "X", after the way the squares windmill, and it's simply achieved by folding the Magic up, folding one square down, and another up:
X simply moves all the pieces around one step, without any rotation or fancy changes:
The next move, I'm going to call "O", since we make the Magic into a loop. Just fold the Magic in half, then slide it one square along. Note that it will then open up the other way:
So, this move can be used to change the orientation of the pieces:
Note that so far, we can only rotate a piece by 180 degrees. Next move, I'll call "S" for "square", as we do a move that makes it into a square, then unfold it again (90 degrees around) to end up in a different shape:
This move rearranges stuff quite a bit, and includes a quarter turn, which should allow us to reach more configurations:
So, given these basic moves, can we create all configurations? Well, we can if we can move any piece number to the top left, rotate it arbitrarily and switch between clockwise and anti-clockwise numbering...
Given this, we can make any 2x6 configuration, so as long as we can go from a 2x6 to the final "W" shape, we can solve the puzzle. And that's the next post.
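The reachability argument above can be sketched as a small search over an abstract state space. A state is just (which piece is top-left, its rotation, which way the numbering runs); the move effects below are my own simplification of the informal descriptions above (X shifts the loop one step, S includes a quarter turn, O rotates pieces 180 degrees and flips the loop direction), not exact transcriptions of the physical moves:

```python
from collections import deque

# A simplified model of the 2x6 states: (piece at top-left, its rotation,
# numbering direction). The move effects are my abstraction of the informal
# descriptions above, not exact transcriptions of the physical moves.
def x_move(state):   # X: everything shifts round one step, no rotation
    p, r, c = state
    return ((p + 1) % 12, r, c)

def o_move(state):   # O: pieces rotate 180 degrees; the loop opens the other way
    p, r, c = state
    return (p, (r + 2) % 4, 1 - c)

def s_move(state):   # S: includes a quarter turn
    p, r, c = state
    return (p, (r + 1) % 4, c)

def reachable(start=(0, 0, 0)):
    """Breadth-first search over all states reachable from `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for move in (x_move, o_move, s_move):
            nxt = move(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(len(reachable()))  # 96 - every state, i.e. 48 up to the half-turn symmetry
```

Under these assumed move effects, the search visits all 12 x 4 x 2 = 96 states, which matches the claim that every configuration is reachable.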
This is old, old news to everyone but me. Growing up, I loved my Rubik's Magic, but I never played with the Master Edition. Moreover, I never tried to analyse it mathematically. So, I decided to finally play the thing, and try to understand it.
Solving it wasn't difficult, but getting around to writing it up has been a right pain. :) I thought I'd start with something very simple: Counting the number of configurations. Specifically, the number of 2x6 flat, rectangular configurations (like the starting position).
First a couple of fairly obvious constraints: Whenever the puzzle is laid flat, it's always with the same set of pieces facing up - there are two "sides", and pieces don't move between the sides. Moreover, each piece is always connected to the same other two pieces - the sequence of pieces in the loop is always the same.
So, we lay it flat in front of us. We can choose either side to face up. There are twelve possible pieces that can go in the top left. This piece can be in one of four rotations. That fully describes the configuration of the top-left piece.
What about everything else? The neighbouring pieces will always be the same, due to the limitations of the "loop", though the loop could be running clockwise or anti-clockwise. Once that's determined, everything else is fixed: the orientation of all the other pieces is fully determined by the orientation of the top-left piece - as it rotates, all the others rotate, like gears.
This gives us 12 x 4 x 2 = 96 configurations. However, the puzzle itself has a rotational symmetry of a half-turn, so really that's just 48 configurations. This is far fewer than I was expecting! By similar logic, there should be 32 configurations for the vanilla magic.
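The counting argument above is simple enough to check in a couple of lines - this is just the arithmetic from the paragraph, nothing clever:

```python
# Count the flat rectangular configurations of a Rubik's Magic loop,
# following the argument above: choose the top-left piece, its rotation,
# and the loop direction, then divide out the half-turn symmetry.
def flat_configurations(num_pieces):
    pieces = num_pieces      # choices for the top-left piece
    rotations = 4            # four rotations of that piece
    directions = 2           # loop runs clockwise or anti-clockwise
    half_turn_symmetry = 2   # the flat rectangle looks the same rotated 180 degrees
    return pieces * rotations * directions // half_turn_symmetry

print(flat_configurations(12))  # Master Edition: 48
print(flat_configurations(8))   # vanilla Magic: 32
```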
This assumes, of course, that all the configurations are reachable - this is an upper bound, but perhaps some configurations can't actually be achieved. As it turns out, all the configurations are reachable, and that's what I plan to demonstrate next time...
So, I've been two weeks at Google, and one side effect I hadn't really noticed before is that my own home projects are getting somewhat sidelined by the fire hose of learning new stuff at work. Ho hum.
On the other hand, I've got a fancy new work phone.
It's a bit bigger than the last phone I had.
On the other hand, the technology's moved on tremendously, and it's really nice to be browsing the web on a mobile without feeling like a second-class citizen. The downside is that moving over to the Android ecosystem is a faff. The apps are ok, but moving my music and photos over is a pain. Still, slowly getting there...
One week of Google, and it really is as people expect. I've learnt a whole pile of cool stuff, but as described in How Google Works, there is much internal transparency, but leaks are taken extremely seriously. So, I'm not going to reveal pretty much anything. Oh, and memegen is as silly as the book makes out.
5/5. Would be recruited by again.
I may have finally written up the papers I read some time ago, but I still have a number of papers from that pile, plus a couple more recently-added ones. Time to discuss them:
Around, er, two years ago, I reviewed a bunch of papers, and noted some other topics I should learn more about. I read up on some of those topics shortly afterwards, but never actually put together a round of paper reviews covering highlights of what I'd read since then. This is it. There are a lot fewer papers than I should have read, but on the other hand, there's a lot of cool stuff that's just readable as web pages now, so I almost feel less embarrassed about it...
I'd previously read about GFS, Bigtable etc., as well as Dynamo and a few others, but there were plenty more Google papers to read. Bear in mind I read these a couple of years before my Google start date:
A Robin Hood Tax is a tiny 0.05% tax on transactions in the financial sector. This could raise 20 BILLION GBP a year. RT if you support this.
To me, this illustrates perfectly why anything even vaguely subtle shouldn't be discussed on Twitter. Fortunately, the Wikipedia entry on Tobin taxes does have a lot of detail. It's a bit long. I thought I'd do my take on it, even though I don't have a Nobel prize in economics (yes, I know it's not a real Nobel anyway).
I've just left a job in the finance industry after ten years, so I think I have a reasonable understanding of it, without having a huge investment in its future.
First of all, I'm not sure how a tiny tax can raise mind-boggling amounts, unless because it's not a tiny tax, but actually a huge tax, pretending to be tiny. To put the twenty billion in perspective (using slightly old data), the UK's finance industry contributed sixty billion in taxes, and pays out fifteen billion in bonuses. In other words, this tax would more than take away all those evil banker bonuses everyone likes to complain about, and up the total tax take by about a third. This is not a subtle tweak, it's a pretty heavy bludgeon.
Of course, it wouldn't raise twenty billion, because unsurprisingly people's behaviour would adjust to take into account the extra costs. The possibilities I see are:
The "don't trade" angle is perhaps worth going into a bit more detail on, for those who don't know the area. When I was working on a precious metals trading system, doing a test trade - buying and selling a future for 100oz of gold, worth over $100,000 - the bid-offer spread cost came to around $20. In other words, each transaction cost around 0.01% of the nominal value. FX is not dissimilar. Taxing an extra 0.05% is going to strongly discourage trading in such situations.
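To put that in numbers, here's the back-of-envelope version, using the (illustrative) figures above and the 0.05% rate from the tweet:

```python
# Rough comparison of an existing trading cost against the proposed
# 0.05% transaction tax, using the gold future example above.
# All figures are illustrative, not a precise market quote.
notional = 100_000       # one 100oz gold future, roughly, in dollars
round_trip_spread = 20   # bid-offer cost of buying then selling

spread_per_transaction = round_trip_spread / 2   # $10 per leg
spread_rate = spread_per_transaction / notional  # 0.0001, i.e. 0.01%

tax_rate = 0.0005                                # the proposed 0.05%
tax_per_transaction = notional * tax_rate        # $50 per leg

print(f"spread cost per transaction: ${spread_per_transaction:.0f} ({spread_rate:.2%})")
print(f"tax per transaction:         ${tax_per_transaction:.0f} ({tax_rate:.2%})")
print(f"tax / spread cost:           {tax_per_transaction / spread_per_transaction:.0f}x")
```

On these numbers, the tax is five times the entire existing transaction cost, which is why I'd expect it to kill off a lot of this kind of trading rather than raise revenue from it.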
Just because a transaction isn't necessary doesn't make it a bad idea. Banks like to hedge their transactions - they want to get rid of the risk associated with their positions, so that if prices move they don't end up losing big piles of money (yes, I know, they may not always be good at this). However, they don't hedge freely, because hedging costs money. If you up the transaction costs, hedging will cost more and it'll be done less. Banks will hedge less and take more risk. Or they'll keep hedging, but pass the costs on to their customers. It's not great, either way.
Finally, what's the point of this tax? I don't think it's really to raise money. It wouldn't raise anything like the suggested amount, due to the reasons above. It's also a very distorting tax, which will affect particular kinds of banking more than others.
Indeed, it's so distorting that it's either put together by an idiot, or it's intended to target a particular sector.
The sector that would be affected is the "flow" sector - the simplest, most straightforward products. The sale of complex derivatives such as the mortgage CDOs blamed for the credit crunch, which are much less liquid, with much bigger bid-offer spreads, would not be affected by this tax. If it's a reaction to the financial meltdown, it makes no sense.
Anyway, let's go with the "bad bankers" narrative, and assume that there are some people who need to have their lives made more difficult. Who is problematic in "flow"? I can see two sets of targets - bad traders and high-frequency traders.
The bad traders keep getting huge fines from the regulators. Fines and jail for wrong-doers seem the way to go here. Blanket taxing everyone involved is hardly an incentive for those who aren't crooked!
The other lot are high-frequency traders, who trade a large volume on tiny, tiny margins. They would be hit extremely hard. They're not actually bankers. They tend to be fairly small companies, operating with their own capital. The main objection is that by being faster than everyone else, they're taking their money away. However, they do seem to have reduced spreads and made the markets more liquid for small traders (i.e. retail customers). HFT basically takes money from the big players by doing what they traditionally did (have an information advantage over everyone else). So far, so meh.
What are the objections? It's socially useless? Candy Crush Saga is probably a bigger waste of time. Anyway. It's an unfair advantage? In the grand scheme of things, the resources required are not huge, and it's a very competitive area.
Let's say it's deemed bad. A Tobin tax is still not a clear winner. The big banks and institutional investors don't like HFT either. Surely they can find a way to deal with HFT? Why, yes! There are dark pools for large trades, exchanges with randomised timing to reduce latency advantages, etc. The problem with HFT may not be that they trade too much, but that they place too many (unfilled) orders, in which case you can cap the order-to-fill ratio. All these things can be fixed in a more targeted manner by adjusting the market mechanism. No tax needed.
I don't know, as I'm not sure what the aim is. If you want to deal with the problems of flow trading, there are better ways. If you want to deal with problems outside flow trading, there are ways that are actually relevant. If you want to raise more taxes, you can try to increase the tax rates in less distorting ways, or perhaps actually reduce the opportunities for tax avoidance, and clamp down on the "borderline" tax behaviour across the whole of big business. If you want to "fix" banking post-2008, you've got to balance what you want. Extracting large amounts of money is incompatible with making the banks build up large reserves. Getting the banks to lend is incompatible with reducing the risk they take. Taxing banks heavily may not actually be very sensible if you own large chunks of them. If you don't like bankers' pay, regulate it (there are guaranteed to be unintended consequences).
In all cases, "Let's have a transaction tax" sounds suspiciously like "We must do something, this is something, let's do it".