How to tree in Golang

Suppose I wanted to write some code that would let me represent functions as trees. For example, the functions -2 and x+y would be represented as these trees:

In Java, I'd write a top-level class that looks something like this:

public abstract class Node {
  protected Node parent;
  protected Node[] children;
  public abstract Object evaluate();
}

Then I could define the various nodes, such as:

public class Negate extends Node {
  public Object evaluate() {
    return -((Double) children[0].evaluate());
  }
}

public class Two extends Node { public Object evaluate() { return new Double(2.0); } }
public class Add extends Node { public Object evaluate() { return ((Double) children[0].evaluate()) + ((Double) children[1].evaluate()); } }

In Go, however, we don't have class hierarchies or abstract methods. Consider this first attempt:

type Node struct {
  Parent *Node
  Children []*Node
}

func (*Node) Evaluate() interface{} { return nil }
type Negate struct { *Node }
func (n *Negate) Evaluate() interface{} { return -n.Children[0].Evaluate().(float64) }
type Two struct { *Node }
func (*Two) Evaluate() interface{} { return 2.0 }

What I've done here is to embed a *Node inside Negate. This is composition, not inheritance. Embedding is really just a convenience: if Negate does not itself define a field or func, but one of its embedded structs does, then the embedded one is used.

For example, the reference to the Children field in Negate's Evaluate func is a reference to the embedded *Node's Children field.

But because Children in a *Node is a []*Node, that means that Children[0] is a *Node, and thus Children[0].Evaluate() is a call to a func on *Node, not to anything else.

For example:

neg := &Negate{&Node{Children: make([]*Node, 1)}}
two := &Two{&Node{}}
neg.Children[0] = two

cannot use two (type *Two) as type *Node in assignment

This fails because two is not a *Node, it is a *Two. We can try to correct this:

neg.Children[0] = two.Node
neg.Evaluate()

panic: interface conversion: interface is nil, not float64

The panic is correct: within Negate's Evaluate func, Children[0].Evaluate() is a call to *Node's Evaluate func, which returns nil.

Conceptually, the pointer diagram looks like this:

The diagram makes it clear that the Children always point to plain Nodes. This is a direct consequence of embedding not being inheritance.

The basic problem is that when we call Evaluate, we need to call it on the original struct, not on the embedded struct. We can do this by using an interface:

package main

import "fmt"

type Evaluable interface {
    Evaluate() interface{}
}

type Node struct {
    Parent   Evaluable
    Children []Evaluable
}

type Negate struct {
    *Node
}

func (n *Negate) Evaluate() interface{} {
    return -n.Children[0].Evaluate().(float64)
}

type Two struct {
    *Node
}

func (n *Two) Evaluate() interface{} {
    return 2.0
}

func main() {
    neg := &Negate{&Node{Children: make([]Evaluable, 1)}}
    two := &Two{&Node{}}
    neg.Children[0] = two
    fmt.Printf("%v", neg.Evaluate().(float64))
}

-2

Run this code here.

So far, so good. Now, let's start adding common funcs to Node, such as one which tells you which child of a parent you are:

package main

import (
    "errors"
    "fmt"
)

type Evaluable interface {
    Evaluate() interface{}
    GetChildren() []Evaluable
}

type Node struct {
    Parent   Evaluable
    Children []Evaluable
}

func (n *Node) Evaluate() interface{} { return nil }
func (n *Node) GetChildren() []Evaluable { return n.Children }

func (child *Node) WhichChild() (int, error) {
    for i, c := range child.Parent.GetChildren() {
        if c == child {
            return i, nil
        }
    }
    return 0, errors.New("Not found")
}

type Negate struct {
    *Node
}

func (n *Negate) Evaluate() interface{} {
    return -n.Children[0].Evaluate().(float64)
}

type Two struct {
    *Node
}

func (n *Two) Evaluate() interface{} {
    return 2.0
}

func main() {
    neg := &Negate{&Node{Children: make([]Evaluable, 1)}}
    two := &Two{&Node{}}
    neg.Children[0] = two
    two.Parent = neg
    c, ok := two.WhichChild()
    fmt.Printf("%v %v\n", c, ok)
    fmt.Printf("%p %q\n", two, two)
    fmt.Printf("%p %q\n", neg, neg)
}

0 Not found
0x1040a130 &{%!q(*main.Node=&{0x1040a128 []})}
0x1040a128 &{%!q(*main.Node=&{<nil> [0x1040a130]})}

Run this code here.

I had to do a few things here. First, in order for us to get the Children of a Node's Parent, I had to give Evaluable a func that gets the Children.

Then I had to implement GetChildren and Evaluate in Node to make Node an Evaluable, so that inside WhichChild I could compare Evaluable to Evaluable.

But it doesn't seem to work! The pointers that I print at the end show that neg.Children[0] is equal to two, so what's going on?

Notice that we call two.WhichChild. But the receiver of WhichChild isn't a *Two, it's a *Node. Thus, we're actually comparing two.Node to two, which will never work. So what can we do?

We can fix this easily by making WhichChild a receiverless function, and giving it an Evaluable as an argument.
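
Here's a minimal sketch of that fix. Note that I've added a GetParent func to Evaluable so that the free function can reach the parent; the code above doesn't have it, so treat this as one way of doing it rather than the only way:

package main

import (
    "errors"
    "fmt"
)

type Evaluable interface {
    Evaluate() interface{}
    GetChildren() []Evaluable
    GetParent() Evaluable // added so a plain function can walk up the tree
}

type Node struct {
    Parent   Evaluable
    Children []Evaluable
}

func (n *Node) Evaluate() interface{}    { return nil }
func (n *Node) GetChildren() []Evaluable { return n.Children }
func (n *Node) GetParent() Evaluable     { return n.Parent }

// WhichChild is now a plain function. Its argument keeps its original dynamic
// type (*Two, *Negate, ...), so comparing it against the parent's Children works.
func WhichChild(child Evaluable) (int, error) {
    for i, c := range child.GetParent().GetChildren() {
        if c == child {
            return i, nil
        }
    }
    return 0, errors.New("Not found")
}

type Negate struct{ *Node }

func (n *Negate) Evaluate() interface{} { return -n.Children[0].Evaluate().(float64) }

type Two struct{ *Node }

func (n *Two) Evaluate() interface{} { return 2.0 }

func main() {
    neg := &Negate{&Node{Children: make([]Evaluable, 1)}}
    two := &Two{&Node{}}
    neg.Children[0] = two
    two.Parent = neg
    c, err := WhichChild(two)
    fmt.Println(c, err) // 0 <nil>
}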

Setting up a Huanyang VFD for a CNC router spindle

I recently bought a CNCRouterParts Benchtop Pro. It is a 24"x24" router. The thing is, it's a kit. It comes in six boxes if you get the plug-and-play electronics package with it, and you have to put it together. But it doesn't come with everything you need.

  • You need a Windows computer to drive it (with Mach3), and of course a monitor, keyboard, and mouse to run it.
  • Then you need a raised surface to put the whole thing on -- you're not going to leave this on your garage floor. I got a shelving unit from McMaster (48x36 2-shelf), then I bought some wheels off of eBay -- because everything in a workshop must be on wheels. The "feet" on the shelving unit were 1/4-20 bolts, so locking wheels with 1/4-20 threads worked perfectly. Then I hacked up some cross-bracing for stability.
  • Also, you need to buy a spindle and a VFD to control it. And a spindle cable to connect the VFD to the spindle. And then you need a 220V circuit for the VFD, and a power cable to connect that circuit to the VFD. The Huanyang VFD seems to be the one you find all over eBay. Everyone's got one.
  • Preferably, the spindle should have an ER20 collet chuck. You will need a set of ER20 collets.
  • Dust collection is absolutely necessary. You do not want to breathe in the fine particulates that the router generates. So, you can either get a complete dust collector, or you can do what most people do: make one out of a ShopVac with a HEPA filter (otherwise you're not getting the harmful fines), an Oneida Dust Deputy DIY cyclonic dust separator, a bucket and a Gamma Seal lid with some hoses to connect them all.
  • Don't forget end mills to actually cut with.
  • How about toolpathing? A G-Code file tells your router what path it should take, but you have to get from your design to G-Code. This is where V-Carve comes in. I decided to get V-Carve Pro. You get a discount from CNCRouterParts if you buy a router from them. However, you can probably get away with Fusion 360. It is free, runs on Windows and OSX, does parametric design (parametric or go home) and can output G-Code.

The kit is obviously not meant to be turn-key. It is not for the impatient or the easily frustrated. But it is cheaper than most solutions, and many of those solutions still need most of the above extra things.

All the things

The 220V circuit (in the US)

What amperage circuit is needed? Well, the most common spindle is 2.2kW, which means 10 amps (2200 divided by 220). You'll need a little extra to compensate for power loss in the cable, so figure 15 amps.

But wait! The age of gasoline is nearing an end, and wouldn't you like your house to be ready for an electric car? A car charger typically runs off 220V, and the higher amperage the better. So I opted to get a 50A circuit -- 40A for a Tesla charger, plus a bit for cable loss.

The plug gives you four wires: a ground, a neutral, and two hots. Each slot in a breaker box is 110V, but consecutive slots are 180 degrees out of phase with each other, which is how you get 220V between the two hots, and why 220V breakers take up two slots. The voltage between the neutral and either hot is 110V, because the neutral sits at the midpoint between the two hots.

I'll be putting a 15A breaker in the circuit between the breaker box and the VFD, because I don't trust the VFD to have its own fuse.

The spindle cable

For some reason, this was the hardest thing to obtain. It needs to be four-conductor shielded cable. It must be shielded because without a shield, such a cable will spew RF all over the place, which is bad for nearby electronics. Like, the stuff that is controlling your motors. The shield must be connected to ground on both ends.

I went to surplus stores, and by extreme chance found some, but the wires were too thick to fit in the connector that came with the spindle. You can get some from Soigeneris, but only if they have it in stock.

For some reason, there seem to be only two makers of this kind of cable: Alpha and Lapp.

I got some from Element14: Lapp Kabel Ölflex servo cable, 4 conductor, 1.5mm wires (24M9570). They sell it by the meter. I ordered 10 units, but because I'm a stupid American, I thought it was by the foot. I ended up getting more than I needed, but better too much than too little.

I wish someone sold this cable with the connector already on it.

The VFD

It turns out that setting up the VFD was the hardest part of the whole project. For one thing, the VFD is very programmable. There are lots and lots of parameters you can set for all sorts of custom circumstances.

But mainly it was difficult because the instruction manual, nominally in English, is horribly written. You can search the webs and find lots of pages on how to set up the parameters in the VFD for your particular motor, but I've found some of the information to be wrong. So here is yet another page on how to set up a VFD, for a particular spindle. I'll try to explain what the parameters really mean.

Scary temporary testing. The shield has not yet been hooked up to ground.

But first, some myths

Here are a few myths I've found which just make no sense, and I really need to put these first. If you ever see these on a VFD page, don't pay attention to that section.

Myth #1: You need to set up the parameters in a particular order.

No. You don't. If you set PD013 to 8, that's factory reset. So of course, you would do that first. But you can set the other parameters in any order you like.

Myth #2: The max RPM of your motor should be divided by the value of parameter PD010, and that should be entered into PD144.

No. For some settings it is just coincidence that PD010 x PD144 = Max RPM. In reality, they have absolutely nothing to do with each other.

The spindle parameters

First, gather your spindle's operating parameters. If you bought one off eBay from China, you only get this data: power (kW), voltage (V), air or water cooled, max RPM. The spindle I bought is 2.2kW, 220V, air cooled, 24000 rpm max.

You're also supposed to know the spindle's maximum operating frequency. This is often 400Hz for the ones you get off eBay.

The VFD parameters

First, reset the VFD to factory settings. You don't know where that thing's been. On the front panel, hit PROG (or PRGM), and then the up and down buttons until you reach PD013. Hit SET. Change the value to 8 using the up and down buttons. Hit SET again. Now your VFD is reset.

For the next parameters, I've renamed them to make some kind of sense. For setting multi-digit values, use up and down to increase and decrease the value, and the >> key to move one digit to the right.

PD001: Command source. Set to 0. 0 means you're controlling the spindle via the front panel controls. 1 means you're using controls that you've wired up to the screw terminals. 2 means you're going to control it using RS-485.

PD002: Speed control source. Set to 1. 0 means you're controlling the speed through the up and down front panel buttons. 1 means you're going to control the speed with either the knob on the front or an external potentiometer. 2 means RS-485.

When PD002 is set to 1, there is also a jumper next to the screw terminals that you have to set. If the jumper is on the right pair, the control is the front panel knob. If the jumper is on the left pair, the control is via an external potentiometer connected to the screw terminals. Make sure the jumper is on the right-side pair.

By the way, I found setting 0 pretty weird. You only get to see the speed as a frequency, not as RPM.

PD003: Default frequency. If PD002 was set to 0, this is the frequency the motor will start running at. The frequency is directly related to the speed. Since we set PD002 to 1, we can leave this alone. But you can set it to something like 200 Hz to start at mid-range.

PD004: Rated frequency: Apparently this is for motors with a fixed frequency. Since the spindle is variable frequency, this setting can be ignored.

PD005 through PD010 set three points on a voltage/frequency curve. As the motor ramps up to your desired speed, it follows this curve. The manual usefully shows three types of curve: constant torque, low torque, and high torque. I've set mine to the values for the constant torque graph (why not).

I think that if you get a VFD with a spindle, the particular model of VFD comes with different factory settings for these depending on the spindle. Which is nice.

PD005: High-end frequency: 400 Hz

PD006: Middle frequency: 2.5 Hz

PD007: Low-end frequency: 0.5 Hz

PD008: High-end voltage: 220 V

PD009: Middle voltage: 15 V

PD010: Low-end voltage: 8 V

PD011: Minimum allowed frequency. Set to 120 Hz. Air-cooled spindles are not meant to stay at low speeds, otherwise they overheat. I understand that water-cooled spindles can go as slow as you want.

Leave the next parameters alone, and skip to...

PD070: Speed control input: Set to 1. This means that the speed will be controlled by an input voltage between 0 and 5V. This is what the front panel knob delivers. 0 means 0-10V. 2 means the control is by an input current between 0 and 20mA. 3 means 4-20mA. 4 is a combination of voltage and current.

PD071: Speed control responsiveness: Leave at the factory setting of 20.

PD072: High-end frequency: Set to 400. This sets the frequency represented by the top end of the speed control.

PD073: Low-end frequency: Set to 120. This sets the frequency represented by the bottom end of the speed control.

Now skip straight to...

PD141: Rated motor voltage: Set to 220V.

PD142: Rated motor current: Set to 11A. Why not 10? Because there will always be some loss in the spindle cable. This compensates for that. But feel free to set it to 10A. The worst that can happen is that your motor loses power at the top end.

PD143: Number of motor poles: Set to 4. This is the number of magnetic poles in the motor. It should be either 2 or 4, and is 4 for the 2.2kW spindle.

PD144: RPM at 50Hz: Set to 3000. Since the max RPM is 24000 at 400Hz, this means that the RPM at 50Hz will be 3000.
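
The relationship is just a proportion, so for a different spindle you can work out PD144 the same way:

    PD144 = max RPM x (50 Hz / max frequency) = 24000 x (50 / 400) = 3000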

That's it!

Testing

Now twist the knob all the way counterclockwise so that you'll start at the lowest speed setting. You can now hit the RUN button and your spindle should start rotating clockwise if you're looking at it from above. If it rotates counterclockwise, press STOP, shut off the power, unplug the VFD, and swap any two of the motor wires. Then try again.

The display may now be showing a frequency rather than a rotational speed -- that is, the HZ light above the display may be lit. Hit >> until the ROTT light above the display is lit. That's RPM.

Now slowly turn the knob clockwise. You should get all the way to 24000 RPM.

Hitting >> until A is lit shows you the current being used by the motor. With no load, mine ran at 1.1A.

How I didn't crack the Voynich Manuscript

I’ve been interested on and off in the Voynich Manuscript, mostly because it appeals to my appreciation of old occult books. Looking at the book, it’s clear there is a lot of structure in its writing. It’s clearly not, say, solely a work of art such as the Codex Seraphinianus. The history of the Voynich and the cracking attempts against it can be found all over the place. I particularly enjoyed Nick Pelling’s book, Curse of the Voynich. It might be fun, I thought, to take a look at it myself.

First step was assigning letters to each glyph. There is a standard format called EVA that has been used to transcribe the Voynich, but I found it too cumbersome. Although EVA can be transformed to any other format due to its expressiveness, that transformation would also rely on an already-completed transcription. I really wanted to start with a fresh eye.

Warning! Yes, I am aware of other researchers’ theories. The idea is that I’ll use my own theory and see where it takes me. Maybe I’ll make a bad decision. In any case, I reserve the right to modify my theories!

Using the images of the Voynich as my starting point, I looked through the first few pages and settled on a basic alphabet of glyphs, closely modeled on EVA:

Voynich glyphs

Again, this is just my initial take: 25 glyphs. I’m not taking into account other, more rare glyphs at this point. Major differences between my notation and EVA are that my ‘g’ is EVA ‘m’, my ‘v’ is EVA ’n’, my ‘q’ is EVA ‘qo’, and my ‘x’ is EVA ‘ch'. I also specifically include an ‘m' and an ’n' symbol for EVA ‘iin’ and ‘in’, respectively.

The one ‘q’ I found with an accent above it, I just decided to render as ‘Q’. Likewise, the ‘x’ with an accent above I render as ‘X’. The tabled gallows characters ‘f’, ‘k’, ‘p’, and ’t’ are represented as capital letters. This doesn’t mean that they are the same character. This is just a mapping of glyphs to ASCII.

Next, I transcribed some pages using this mapping. I decided to use ‘^’ for a character indicating the beginning of a “paragraph”, including the beginning of any obviously disconnected text areas. Similarly, ‘$’ marks the end of a “paragraph”. A period is a space between “words”. Occasionally I had to make a judgement call as to whether there was a space or not. Furthermore, lines always end in either a period or, if the line is the last in a paragraph, an end-paragraph symbol.

Finally, any character that I could not figure out or that doesn’t fit in the mapping is replaced by an asterisk.

Thus, the first paragraph on the first page is transcribed as follows:

F1r

F1r t

Again, there are many caveats: I had to make judgement calls on some glyphs and spacing. I’m also aware that Nick Pelling has a theory that how far the swoop on the v goes is significant, and clearly I’m ignoring that.

In any case, after transcribing a few pages, I became aware of certain patterns, such as -am nearly always appearing at the end of a word, and three-letter words being common. I did a character frequency analysis, and found that ‘o’ was the most common character, with ‘y’, ‘a’, and ‘x’ being about 50% of the frequency of ‘o’. Way at the bottom were 'F', 'f', and 'Q'.

Then I did another frequency analysis, this time of trigrams. Most common were ‘xol’, ‘dam’, and ‘xor’. Then I asked, how often do the various trigrams appear at the beginning of a word and at the end of a word? The top five beginning trigrams were ‘xol’, ‘dam’, ‘xor’, ‘Xol’, and ‘xod’, while the top five ending trigrams were ‘xol’, ‘xor’, ‘dam’, ‘Xol’, and ‘ody'.
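
Counting trigrams from a transcription in the format described above takes only a few lines. Here's an illustrative sketch in Go; the tiny sample transcription is made up for the example, and my real counts came from the pages I transcribed:

package main

import (
    "fmt"
    "strings"
)

func main() {
    // A few words in the notation described above: '.' separates words,
    // '^' and '$' mark paragraph boundaries, '*' is an unreadable glyph.
    transcription := "^xol.otxol.dam.xor.ypxol.xolam$"

    counts := map[string]int{} // all trigrams
    starts := map[string]int{} // trigrams at the start of a word
    ends := map[string]int{}   // trigrams at the end of a word

    words := strings.FieldsFunc(transcription, func(r rune) bool {
        return r == '.' || r == '^' || r == '$'
    })
    for _, word := range words {
        for i := 0; i+3 <= len(word); i++ {
            tri := word[i : i+3]
            if strings.Contains(tri, "*") {
                continue // skip unreadable glyphs
            }
            counts[tri]++
            if i == 0 {
                starts[tri]++
            }
            if i+3 == len(word) {
                ends[tri]++
            }
        }
    }
    fmt.Println(counts)
    fmt.Println(starts)
    fmt.Println(ends)
}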

Are these basic lexemes? I began to suspect that lexemes could encode letters or syllables.

I turned to page f70v2, which is the Pisces page. There were 30 labels, each next to a lady. The labels were things like ‘otolal’, ‘otalar’, ‘otalag’, ‘dolarag’… Most looked like the words were composed of three two-letter lexemes: ‘ot’, ‘ol’, ‘al’, ‘ar’, ‘dol’, ‘ag’, and so on.

So I decided to look for more lexemes by picking the most common trigram, ‘xol’, and looking for words containing it. These were: ‘xol’, ‘otxol’, ‘o*xol’, ‘ypxol’, ‘dxol’, ‘xolam’, ‘xolo’, ‘xololy’, ‘xolTog’, ‘opxol’, ‘btxol’, ‘xoldy’, ‘xolols’. That would give us the lexemes ‘ot-’, ‘yp-‘, ‘d-‘, ‘-am’, ‘-o’, ‘-oly’, ‘-Tog’, ‘op-‘, ‘bt-‘, ‘-dy’, and ‘-ols’.

Similarly, for ‘xor’, we get lexemes ‘d-‘, ‘xeop-‘, ‘k-‘, ‘ot-‘, ‘bp-‘, ‘ok-‘, ’t-‘, ‘-am’, and ‘Xk-‘. At least in the first few pages.

So certain lexemes seemed to come up often, namely ‘ot-‘, ‘ok-‘, ‘op-‘, ‘d-‘, ‘-am’, ‘-dy’.

One possibility that no doubt has already been discounted decades ago is that each lexeme corresponds to a letter. So something common like ‘xol’ could be ‘e’. Because there are so many lexemes, clearly a letter could be encoded by more than one lexeme. Or maybe each lexeme encodes a syllable, so ‘xol’ could be ‘us’. I tend to doubt the polyalphabetic hypothesis, since then I would expect to see perhaps more uniform statistics.

Maybe the labels in Pisces encode numbers. If that’s the case, then by Benford’s Law, ‘ot-‘ would probably encode the numeral 1, since that appears in the first position 16 out of 30 times, and ‘ok-‘ could be 2, appearing 8 times.

Anyway, that’s as far as I’ve gotten. 

Machine Learning: Sparse RBMs

In the previous article on Restricted Boltzmann Machines, I did a variety of experiments on a simple data set. The results for a single layer were not very meaningful, and a second layer did not seem to add anything interesting.

In this article, I'll work with adding sparsity to the RBM algorithm. The idea is that without somehow restricting the number of output neurons that fire, any random representation will work to recover the inputs, even if that representation has no organizational power. That is, the representation learned will likely not be conducive to learning higher-level representations. Sparsity adds the constraint that we want only a fraction of the output neurons to fire.

The way to do this is by driving the bias of an output neuron more negative if it fires too often over the training set. Or, if it doesn't fire enough, increase the bias. Octave code here. The specific function I added is lateral_inhibition.
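
The adjustment itself is simple. Here's a sketch in Go rather than Octave; this is not the actual lateral_inhibition function, so the names and exact update rule are mine:

// adjustBiases nudges each output neuron's bias after a pass over the
// training set: a neuron that fires more often than the target fraction gets
// a more negative bias, and one that fires too rarely gets a more positive one.
func adjustBiases(biases, meanActivation []float64, target, sparsityParam float64) {
    for j := range biases {
        biases[j] += sparsityParam * (target - meanActivation[j])
    }
}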

I used the same data set based on horizontal and vertical lines, 5000 patterns. I settled on 67 output neurons, 500 epochs, with momentum, changing halfway through. I decided that a fraction of 0.05 would be interesting, meaning that on average I would want 67 x 0.05 = 3.35 output neurons activated over all 5000 patterns. In order to compensate for the tendency for biases to be very negative due to the sparsity constraint, I set the penalty for weight magnitudes to zero so that weights can become stronger to overcome the effect of the bias.

The change in bias is controlled by a sparsity parameter. I ran the experiment with various sparsity parameters from 0 (no sparsity) to 100, and here are the costs and activations:

LibreOfficeScreenSnapz003

LibreOfficeScreenSnapz004

The magnitude of the sparsity parameter doesn't seem to have much effect. Although you can't tell from the graph, there is in fact a small downward trend in the average number of active outputs. Right around 10, the average number of active outputs reaches the desired 3.35, where it stays up to about 80, and then it starts dropping again. So 10 seems like a good setting for this parameter.

Here are the patterns that each neuron responds to, with differing sparsity parameters:

 

Sparsity 0:

OctaveScreenSnapz008

Sparsity 1:

OctaveScreenSnapz009

Sparsity 5:

OctaveScreenSnapz010

Sparsity 10:

OctaveScreenSnapz011

Sparsity 50:

OctaveScreenSnapz012

Sparsity 100:

OctaveScreenSnapz013

Sparsity 200:

OctaveScreenSnapz014

With no sparsity, we get the expected near-random plaid patterns, and nearly all neurons have something to say about any given pattern. With even a little sparsity, however, the patterns do clean themselves up, although not by much, and by sparsity 200, the network learns nothing at all.

One possible reason the patterns don't look very sparse is that we asked for the average number of neurons activated over the entire data set to be 5%. But how much of the data set actually contains a non-empty image? In fact, about 49% of the data set is empty.

What if we require that no data instance be empty? This time a target fraction of 0.05 ends up with a relatively terrible log J of -1.8, compared to the previous result of about -2.4. However, increasing the target fraction to 0.07 gives us a log J of -2.6, which is better than before. This is to be expected, since more active neurons will be able to represent patterns more closely. And yet, we get better representations anyway:

Sparsity 10:

OctaveScreenSnapz015

Sparsity 20:

OctaveScreenSnapz016

Sparsity 50:

OctaveScreenSnapz017

Sparsity 100 has very poor results.

The visualization is a bit misleading: although many pixels are something other than full white, that doesn't mean the neuron will be activated with high probability when those pixels are on. The maximum weight turns out to be 11.3, with the minimum being -4.7. The visualization routine clips the values of the weights to [-1,+1], meaning that anything -1 or lower is black, while anything +1 or higher is white. However, by using visualize(max(W+c', -0.5)), we can take into account some of the threshold represented by the (reverse) bias from output to input. We also clip at -0.5 so that we can at least see the outlines of each neuron.

So here is another run with sparsity 50:

OctaveScreenSnapz018

We can see that, in fact, each neuron does respond to a different line, and that just about 18 lines are represented, as expected.

Reverse engineered part

After fiddling around with the part from the previous article, I think I might have a reverse engineered technical diagram. I still don't know enough about early 20th century mechanical design techniques to know if this is what they would have done, but it should be enough to at least remanufacture this part.

I also realized that I haven't actually described the part! There are two registers on the typical Monroe calculator, an upper register which indicates operation count (useful for multiplication and division) and a lower register which indicates total. There's a crank which, when turned one way, zeroes out the upper register, and when turned the other way, zeroes out the lower register. The part that I reverse engineered is shown in the original 1920 US patent 1,396,612 by Nelson White, "Zero setting mechanism" in Figure 5. In the patent, the part, 32, is described as follows:

 

The shaft 60 is normally locked or held against rotation by a rigid arm 32, pivoted upon the shaft 84, and at its free end engaging a peripheral notch 33, of a plate or disk 34, secured to the gear 12...

 

So the next step might be to make an OpenSCAD file for the part, and put it on Thingiverse so that anyone can recreate the part. It probably can't be 3D-printed at this point, since it really needs to be a metal part. Even Shapeways, which can 3D print metal parts from stainless steel combined with bronze, can only achieve a 1mm detail, and this thing is much more detailed than that.

Full-sized files in various formats: AI | PDF | SVG | PNG

UPDATE: See the thing on Thingiverse.

Carriage pawl reveng

Reverse engineering mechanical parts

Or,

Numerology that sorta kinda works!

One of my half-baked projects is to take apart an early 20th century Monroe mechanical calculator and reverse engineer it so that I have a full set of engineering diagrams of every part. This would enable anyone to recreate broken parts and fix their calculator.

Reverse engineering the design of an early 20th century mechanical part has a lot in common with numerology. If the numbers coincidentally fit, then they're probably right. If they almost, but not quite fit, then they're probably right anyway.

Here's a part that I scanned on an Epson Perfection V700 scanner. This scanner is based on a CCD, not LiDe, which means that it has non-zero depth of field. That means that you can scan a part that has height and it won't end up too blurry. I scanned the part at 1400 dpi so that I could optically measure it. The thing sticking out at the top is just a screwdriver that I used to hold the part horizontal.

Lower register pawl

I could pop this into Illustrator and use the pen tool to trace around the part, but all this would get me is an outline of this particular part with no insight into why it had that particular outline. This part was designed, not evolved. It was designed to work with other parts. So clearly its measurements and the relationships between one bit of the part and another are not arbitrary.

For example, take the hole at the top. It fits over a shaft. Now, "the ancients" probably didn't use shafts of arbitrary diameter. They were standard, and since this was an American design, that meant fractions of an inch, specifically inverse powers of two: 1/2, 1/4, 1/8, and so on. The hole at the top measures between 0.187" and 0.188" on my calipers. But 3/16" is 0.1875", so it makes sense that the engineers designed this hole to be exactly 3/16". This fits around the shaft that is 0.001" under 3/16", which I suppose is a standard undersized shaft.

The 1910 Cyclopedia of Mechanical Engineering, edited by Howard Raymond, has this to say on page 129 in the section on mechanical drawing: "Keep dimensions in even figures, if possible. This means that small fractions should be avoided… Even figures constitute one of the trade-marks of an expert draftsman. Of course a few small fractions, and sometimes decimals, will be necessary. Remember, however, that fractions must in every case be according to the common scale; that is, in sixteenths, thirty-seconds, sixty-fourths, etc.; never in thirds, fifths, sevenths, or such as do not occur on the common machinist's scale."

In Illustrator, I pulled up the image and drew a circle of diameter 3/16", placing it so that it fit exactly into the hole in the image. Now I had the center of that hole, and I could draw more concentric circles. Because I could measure these diameters directly on the part, I used those diameters: 3/8" and 7/16".

Adobe Illustrator CS6ScreenSnapz002

The measurements in the image were done using VectorScribe.

Note that while the inner circle and middle circle (diameter 3/8") fit exactly, the outer circle (7/16") does not. The outer circle does not seem to be quite concentric, but numerology: if it's nearly right, it probably is. By moving the outer circle a few thous, I was able to get a good registration. Under high magnification, I was able to tell that the inner subpart was welded onto the sheet metal subpart, so all this indicates that there were several steps involved in manufacturing this part: first, turn the small subpart on a lathe. Then create the larger subpart from sheet metal. Then weld the two together. Welding the two together was apparently not an extremely exact procedure.

After moving the large circle to its new center, the centers no longer coincide.

Adobe Illustrator CS6ScreenSnapz003

Now for the rest of the part. Using SubScribe, I drew a circular arc on a circular-looking feature. Then I measured its radius.

Adobe Illustrator CS6ScreenSnapz004

0.283" x 2 = 0.566" is close enough to a diameter of 9/16" to say that this was the intent of the original engineer. I drew the circle, and then measured the distance between the centers.

Adobe Illustrator CS6ScreenSnapz005

0.568" is again close enough to 9/16". And not coincidentally, this second center coincides precisely with the location of another shaft on the machine. That certainly nails down the intent of the engineer.

I can now draw the inner tangent line between the two circles (done again using SubScribe):

Adobe Illustrator CS6ScreenSnapz006

The length and angle of this line in fact do not matter, since there is one and only one inner tangent line connecting these two circles in the right direction. Certainly 0.269 is close to 17/64, but that was not the design constraint. The line had to be tangent to the two circles, and drawing inner and outer tangent lines were geometric constructions that were familiar to the ancients.

We can now draw another concentric circle corresponding to the outer outline of the part, and draw an outside tangent line. Again, knowledge of design intent lets us set the outer circle's diameter at 7/8", which seems to fit precisely onto the part.

Adobe Illustrator CS6ScreenSnapz007

The more excitable among you may have noticed by now that the inner surface of the inner tab appears to be a circular arc, and you would be right. Drawing the circle freehand gives us a diameter of 0.439", which is close enough to 7/16" as to fix that measurement. But right now I won't analyze the tab, since I want to get the larger part done.

Near the bottom of the part, we can draw some tangent lines.

Adobe Illustrator CS6ScreenSnapz008

I did this in SubScribe by picking a point on the straight section of the part, then drawing a line tangent to the circle. Then I extended the line outwards. Now those lines could have started anywhere on the circle. Why these particular points? Let's draw some lines intersecting the centers. I'll also rotate the diagram so that the inter-center segment is horizontal.

Adobe Illustrator CS6ScreenSnapz009

The angle that the rightmost tangent line forms with the horizontal, 67.35 degrees, is irrelevant, since the constraint for that intersection point was based on an outer tangent line. But consider the angle formed between that angle and the next intersection: 125.71 - 67.35 = 58.36 degrees. This is close to 60 degrees, a nice round angle. For the intersection in the inner circle, the angle is 171.76 - 67.35 = 104.41, which is very close to 105 degrees, which is 60 + 45, more round angles. So the design intent seems clear: the outer intersection is 60 degrees from the outer tangent line, while the inner intersection is 45 degrees away from that. Let's move the intersections and construct the tangent lines so these relations become exact.

Adobe Illustrator CS6ScreenSnapz010

As mentioned above, this pawl fits into a hole on a gear located on a shaft. There are three shafts so far, let's call them A, B, and C. The pawl fits on shaft A, goes around shaft B, and the gear it locks is on shaft C. We know that the distance between shaft A and shaft B is 9/16". I also know from direct measurement that the distance between shaft C and shaft B is also 9/16". However, the distance between shaft A and shaft C is irregular: 0.977", not close to any fraction at all. This may be due to some constraint that we do not yet know about.

However, let's pretend that 0.977" is eventually determined through some constraint, and place the location of shaft C on the diagram.

(Update, 17 Dec 2012: It turns out that a line drawn from B perpendicular to A-C has a length of very close to 9/32", which makes A-C tangent to the 9/16" diameter circle around B. Maybe that's why the shafts are where they are: A-B is 9/16", B-C is 9/16", and A-C is tangent to the 9/16" diameter circle around B.)

Adobe Illustrator CS6ScreenSnapz012

I also put a circle of diameter 3/16" (shaft C's diameter), and another of diameter 3/8" around shaft C's center, which corresponds to the size of shaft C's bushing where the hole is. You can imagine the pawl fitting into a hole on the bushing by looking at the diagram.

It seems fairly clear that the pawl's end is designed to fit into the hole. The end also isn't square; it is tapered. Remember that inner tab? There is a cam which that inner tab rides on. The large diameter for the cam measures 17/32", and the small diameter measures 29/64". When the cam is rotated so that the large diameter pushes the inner tab, the pawl lifts out of the hole. When the cam small diameter is against the tab, the pawl is inside the hole. I can add the two cam diameters and then rotate the image of the pawl to simulate the two states.

In the hole (locked state):

Adobe Illustrator CS6ScreenSnapz016

Out of the hole (unlocked state)

Adobe Illustrator CS6ScreenSnapz017

Clearly one design criterion we can deduce is that the width of the pawl's end at the outside of the hole when the pawl is in the locked state must be equal to the width of the hole, and the pawl must thereafter taper. Measuring the width of the pawl in the locked state at the hole gives 0.081". Perhaps not surprisingly, this is the diameter of the hole as measured with calipers. In fractions, this is near enough to 13/16", a drill size that any mechanical engineer would have had.

Adobe Illustrator CS6ScreenSnapz018

Here I've drawn the outline of the hole along with its centerline. I've made the depth just deep enough for the pawl's end. We can see that the tapered pawl end does indeed fit in the hole.

Adobe Illustrator CS6ScreenSnapz019

Resetting the part to its design position, I found that I could draw a line along the outer outline of the pawl tangent to shaft C's outline:

Adobe Illustrator CS6ScreenSnapz020

The angle formed between the C-B line and the beginning of the construction line is 129 degrees. It is not a very round angle, and if the angle were not too important, it would make more sense to have it be round, perhaps divisible by 5.

Another possibility is to look at the angle formed by a radial line with the intersection:

Adobe Illustrator CS6ScreenSnapz021

Relative to our reference angle at the right, this is 91.57 degrees. Too far away from 90 degrees; a line placed at 90 degrees intersects the upper outline nowhere near the right place. The radial line is also 31.57 degrees from the 60-degree line. This could be significant, since 31.5 is exactly 7/10 of 45 degrees, and placing a radial line at 31.5 degrees produces an intersection very nearly at the drawn intersection.

I don't know enough about early 20th century mechanical design techniques to know if this would be reasonable: angles measured in tenths of 45 degrees.

If you'd like to have a try at figuring this out, here's the Illustrator file.

Machine Learning: Restricted Boltzmann Machine

The Restricted Boltzmann Machine is an autoencoder which uses a biologically plausible algorithm. It uses a kind of Hebbian learning, which is the biologically plausible idea that "neurons that fire together, wire together".

Suppose we have an input layer of dimension Nin and an output layer of dimension Nout, with no hidden layer in between. All input units are connected to all output units, but input units are not connected to each other, and output units are not connected to each other, which is where the "Restricted" in the name comes from.

Let the input units be binary, so either 0 or 1. Further, each output unit uses the logistic function, so that the output of unit j is:
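
In the usual notation, with xi the binary inputs, that is:

p_j = \sigma\Big(b_j + \sum_i w_{ij}\, x_i\Big), \qquad \sigma(z) = \frac{1}{1 + e^{-z}}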

bj is the bias term for output unit j, and wij is the weight from input unit i to output unit j. Note that we're treating the logistic function's value as a probability, but the output itself is binary rather than the probability: output unit j turns on with probability pj. That is, the output unit is stochastic.

Now, to make this an autoencoder, we want to feed the outputs backwards to the inputs, which works as follows:
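
The downward pass mirrors the upward pass, using the same weights: the probability that input unit i should be on, given the sampled binary outputs hj, is

\tilde{p}_i = \sigma\Big(c_i + \sum_j w_{ij}\, h_j\Big)

and the reconstructed binary input is sampled from that probability.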

We're just doing the same thing in reverse, except that there is a bias ci now associated with each input unit. There is some evidence that this kind of feedback happens biologically.

After this downward pass, we perform one more upward pass, but this time using the probability input rather than the reconstructed input:

We do this for every input sample, saving all the values for each sample. When all the m samples have been presented (or, in batch learning, when a certain proportion m of the samples have been presented) we update the biases and weights as follows:
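
In the notation of Hinton's guide, these are the CD-1 updates (ε is the learning rate, angle brackets are averages over the m samples, and the hats mark the second upward pass); take this as the standard form rather than a literal transcription of my Octave code:

\Delta w_{ij} = \varepsilon \left( \langle x_i\, p_j \rangle_{\mathrm{data}} - \langle \tilde{p}_i\, \hat{p}_j \rangle_{\mathrm{recon}} \right), \qquad
\Delta b_j = \varepsilon \left( \langle p_j \rangle_{\mathrm{data}} - \langle \hat{p}_j \rangle_{\mathrm{recon}} \right), \qquad
\Delta c_i = \varepsilon \left( \langle x_i \rangle_{\mathrm{data}} - \langle \tilde{p}_i \rangle_{\mathrm{recon}} \right)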

And that's the Restricted Boltzmann Machine. There are other refinements such as momentum and regularization which I won't cover, but which are implemented in the example Octave file. There are also many extremely helpful hints as to parameter settings in Hinton's paper A practical guide to training restricted Boltzmann machines.

As an example (Octave code here), I set up a 9x9 array of inputs which correspond to pixels (0=off, 1=on), so my input dimension is 81. I generate a bunch of samples by placing some random vertical and horizontal lines in, but with more lines being less likely. According to the paper, a good initial guess as to the number of output units is based on the number of bits a "good" model of the input data would need, multiplied by 10% of the number of training cases, divided by the number of weights per output unit.

Each sample has zero horizontal lines with probability 0.7, one horizontal line with probability 0.21, and two horizontal lines with probability 0.09, with a vanishingly small probability of three lines, and likewise with vertical lines. Each line can be at any of nine positions, which means that a "good enough" model of the input would need something like 9 bits for the horizontal lines and 9 bits for the vertical lines, or 18 bits. With 3000 or so samples, that gives 300 x 18 / 81 = 67 output units.

I also used 2000 epochs, averaging weights through time, a momentum term which starts at 0.5, then switches to 0.9 halfway through, a learning rate which starts at 1, then switches to 3 halfway through, and a regularization parameter of 0.001.

I set aside 10% of the samples as cross-validation, and measure the training and cross-validation costs. The cost (per sample per pixel) is defined as:
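
Written out, with m samples, N = 81 pixels, and p̃ the downward-pass probability that a pixel is on, this is the usual cross-entropy averaged per sample and per pixel:

J = -\frac{1}{m N} \sum_{i=1}^{m} \sum_{k=1}^{N} \Big[ x^{(i)}_k \ln \tilde{p}^{(i)}_k + \big(1 - x^{(i)}_k\big) \ln \big(1 - \tilde{p}^{(i)}_k\big) \Big]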

In other words, this is the usual logistic cost function, except that instead of the output, we're using the probability during the downward pass that an input pixel is 1.

Here are 16 example input samples:

Samples

And here are the trajectories of the training and cross-validation costs over the 2000 epochs. Strictly speaking, the cost is a better measure than error in the reconstructed input since it is not as affected by chance pixel flips during reconstruction (output to input) as a direct pixel-to-pixel comparison. For example, if an input pixel should be on, and it is only turned on 90% of the time during reconstruction, then the cost for that pixel is -ln 0.9 =  0.105. If we were to actually reconstruct that pixel, then 10% of the time the error would be 1, and 90% of the time the error would be 0; but we only do a single evaluation. So the cost gives us a better idea of how likely a pixel is to be correct.

Logj layer1

Blue is the training cost, while red is the cross-validation cost. We can see that the cross-validation cost is a little higher than the training cost, indicating that there is likely no overfitting or underfitting going on. The sudden acceleration at epoch 1000 is due to changing the momentum at that point. The end training cost is about 0.004 while the cross-validation cost is about 0.005.

Here is a view of what the weights for each of the 67 output units encode:

Weights1

Interestingly, only one output unit seems to encode a single line. The others all seem to encode linear combinations of lines, some much more strongly than others. The data shows that on average, 38 of the 67 output units are active (although not all the same ones), while at most 51 are active (again, not all the same ones).

Varying the number of output units affects the final cost, apparently with order less than log N. The end cost for 200 units is about 0.002, as is the cross validation cost. The average number of activated outputs appears to be a little over half.

Logjn layer1

Act layer1

We can learn a second layer of output units by running all the input samples through to the first output layer, and then using their binary outputs as inputs to another RBM layer. We would be using probabilistic binary outputs, so it is important to have enough samples that the next layer gets a good idea of what the input distribution is. We can use the probability outputs directly, but I've found, at least with this toy problem, that this doesn't seem to lead to significantly better results.

To try this, I'll use 100 units in the first layer, which could be overkill, and a variable number of units in the second layer, from 10 to 200. To get the cost, I can run the output all the way back to the input layer. Here's the result in log cost per pixel:

J layer2

So this isn't so good: the error rate for the second layer is much higher than that for the first layer. One possibility is that the first layer is so good that the second layer is not necessary at all. But then I would have thought we would get at least the same error rate.

That's where I'll leave this article right now. Possible future investigations would be more complex inputs, and why layer 2 refuses to be as good as layer 1.

My Cabinet of Obsolete Technologies

Over the years, I've bagged some technological items from before I was born, and some nostalgic items from the 80s.

 

DSC00114

A Radio Shack 40-155 "Personal Stereo Speaker System",
a 1985 Sony WM-F12 Walkman with headphones,
and a 1982 Radio Shack PC-2 Pocket Computer in its case.

 

DSC00123

1980 Sound Gizmo and 1980s Merlin


DSC00118

Arithma Addiator (with its case and stylus), a film sprocket thing,
a Burroughs punched card printing plate for CanTabCo, and small wooden slide rule


DSC00119

Modern fakes, but in the corner is a roll of J. L. Hammett aluminum foil
for a mimeograph 


DSC00120

Vacuum tubes, CRTs, voltmeter


DSC00122

Dymo label machine, photomultiplier tube,
some individually wrapped screws for the Air Force 

Machine Learning: Autoencoders

An autoencoding algorithm is an unsupervised learning algorithm which seeks to recreate its input at its output. The layer or layers between input and output then become a representation of the input, which may have fewer or more dimensions than the input.

If the internal layer is further restricted so that only a very few of its components are active for any given input, then it is a sparse autoencoder. Generally, if the dimension of the internal layer is less than that of the input layer, then the autoencoder is performing, appropriately enough, dimension reduction. If, however, the number of dimensions is greater, then we enter the realm of feature detection, which, to me anyway, is a much more interesting application of autoencoding. In addition, feature detection appears to be how the brain handles input.

One of the challenges of feature detection is to ensure the internal layers don't degenerate to a trivial representation of the input, that is, simply repeating the input so that each feature is simply an input feature.

I'll start by talking about autoencoding via backpropagation. Before we tackle this, I'd like to rehash the mathematics of backpropagation, but this time in matrix form, which will be much easier to handle. So feel free to skip if you're not really interested.

 

Backpropagation, a more thorough derivation

We start with the same diagram as before:

Neural network

This time, however, we'll use matrix notation. The equation for the vector of activations for layer l is as follows:
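
In symbols (this is the standard feedforward equation; the sanity check below uses the same z = Wa + b):

z^{(l)} = W^{(l-1)} a^{(l-1)} + b^{(l)}, \qquad a^{(l)} = g\big(z^{(l)}\big)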

where:

  • a(l) is a column vector of sl elements (i.e. an sl x 1 matrix), the activations of the neurons in layer l,
  • b(l) is a column vector of sl elements (i.e. an sl x 1 matrix), the biases for layer l, equivalent to a fixed input 1 multiplied by a bias weight, separated out so we don't have to deal with a separate and somewhat confusing input augmentation step,
  • W(l-1) is an sl x sl-1 matrix for the weights between layer l-1 and layer l, and
  • g is a squashing function, which we can take to be the logistic function (for range 0 to 1) or the tanh function (for range -1 to 1). Or really any differentiable function.

A quick sanity check for z = Wa + b: W is sl x sl-1, a is sl-1 x 1, so multiplying W x a cancels out the middle, yielding sl x 1, which is consistent with the definitions for z and b.

Now, the cost function for a single data point x(i),y(i) is as follows:
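
With the conventional one-half factor (which makes the derivatives come out clean), this is:

J^{(i)} = \tfrac{1}{2} \big\lVert a^{(L,i)} - y^{(i)} \big\rVert^2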

|| a - y || is simply the Euclidean distance between a and y, otherwise known as the L2 norm. Note also that it is a scalar, and not a vector or matrix.

The cost over all data points, and adding a regularization term, is:

That last term simply means to take every weight between every neuron and every other neuron in every layer, square it, and add. We don't take any of the bias terms into the regularization term, as usual.

Now, first, we want to determine how gradient descent moves W(L-1) and b(L):

This just says that we move W downhill in "J-space" with respect to W, and the same with b. Note that since W(L-1) is an sL x sL-1 matrix, then so too must the derivative of J with respect to W(L-1) be. And now let's compute those derivatives. First, the derivative with respect to the weights in the last layer:

Note that we just called the derivative of g with respect to its argument, g'. For the logistic and tanh functions, these are nice, compact derivatives:
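
In terms of g itself:

g'(z) = g(z)\,\big(1 - g(z)\big) \ \ \text{(logistic)}, \qquad g'(z) = 1 - g(z)^2 \ \ \text{(tanh)}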

Since the argument of g (being z(L,i)) is an sL x 1 matrix, so too is its derivative. a(L-1,i) is an sL-1 x 1 matrix, its transpose is a 1 x sL-1 matrix, and thus g' times that transpose is an sL x sL-1 matrix, which is consistent with what we wanted the size of the derivative of J with respect to W(L-1) to be.

And now with respect to the bias on the last layer:

Let us define:

Note that this is an sL x 1 matrix. It is the contribution to the weight or bias gradient due to an "error" in output. We can now define our derivatives more compactly:

Now, what about the derivatives with respect to the previous layer weights and bias? The key insight in backpropagation is that we can generalize these derivatives as follows. For l from L to 2 (we start from L because these are recursive equations) we have:
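
One standard way to write this (keeping the convention above, where W(l-1) sits between layers l-1 and l, and with ∘ the element-wise product) is:

\delta^{(l-1,i)} = \Big( \big(W^{(l-1)}\big)^{\top} \delta^{(l,i)} \Big) \circ g'\big(z^{(l-1,i)}\big), \qquad
\frac{\partial J^{(i)}}{\partial W^{(l-1)}} = \delta^{(l,i)} \big(a^{(l-1,i)}\big)^{\top}, \qquad
\frac{\partial J^{(i)}}{\partial b^{(l)}} = \delta^{(l,i)}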

A rigorous mathematical treatment for this is so completely outside the scope of this article as to be invisible :) But the general argument is that delta represents the contribution of a layer to the gradient based on the error between desired output and generated output. For the final layer, this is straightforward, and we can directly calculate it. However, for an internal layer, it is as if the errors from the next layer have propagated backwards through the weights, and so we can calculate, from output to input, the contributions of each layer.

 

Backpropagation, the algorithm

First, zero out an accumulator for each layer. The accumulators have the same dimensions as the weight and bias matrices. So for l from 2 to L:

Second, compute all the forward activations a for a single data point. So, for l from 2 to L, we have:

Compute the delta terms for l from L to 2, and add to the accumulators:

Next, after doing the above two steps for each data point, we compute the gradients for l from 2 to L:

Finally, we use these gradients to go downhill, for l from 2 to L:

That is one round of updates. We start from zeroing out the accumulators to do the next iteration, and continue until it doesn't look like the cost is getting any lower.

Instead of the above, we could provide a function which, given W and b, computes the cost and the derivatives. Then we give that function to a library which does minimization. Sometimes minimization libraries do a better job at minimizing than manually doing gradient descent, and some of the libraries don't need a learning parameter (alpha).

 

Adding a sparseness criterion

The whole reason for going through the derivation and not going straight to the algorithm was so that we could add a sparseness measure in the cost function, and see how that affects the algorithm.

First, if we have d dimensions in the input, then an autoencoder will be a d:1:d network.

We will first determine the average activation of layer 2 over all data points:
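
In symbols, over the m data points:

\hat{\rho} = \frac{1}{m} \sum_{i=1}^{m} a^{(2,i)}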

Note that this is an s2 x 1 matrix. To be sparse, we want the values of each element to be very low. If we're using a logistic function, this means near to zero. If we're using the tanh function, near to -1, but we will rescale the average activation to lie between 0 and 1 by adding 1 and dividing by 2.

Let us denote our target sparsity for each element as ρ, so that we want our measured sparsity to be close to that. Clearly we don't want ρ=0, because that would give us a trivial solution: zero weights everywhere.

For a sparsity cost, we will use the following measure, known as the Kullback-Leibler divergence:
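
With ρ̂j the measured average activation of hidden unit j, the penalty is:

\sum_{j} \left[ \rho \ln \frac{\rho}{\hat{\rho}_j} + (1 - \rho) \ln \frac{1 - \rho}{1 - \hat{\rho}_j} \right]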

Note that the sum applies element-by-element to the measured sparsity vector, and so the cost is a scalar. This cost is zero when each measured sparsity element is equal to the desired sparsity, and rises otherwise.

We add this cost to our main cost function as follows:

where β is just a parameter whereby we can tune the importance of the sparsity cost.

Without going through the derivation, we use the following altered delta for layer 2 during backpropagation:

That scalar term added due to sparsity is computed only after all data points have been fed forwards through the network, because that is the only way to determine the average activation of layer 2. It is independent, therefore, of i. So the modification to backpropagation would require this:

  1. Going through the data set, compute all the way up to layer 2, and accumulate the sum of the activations for each neuron in layer 2.
  2. Divide the sums by m, the number of points in the data set.
  3. Perform one iteration of backpropagation.
  4. Go back to step 1.

 

Why I don't like this

It is said that backpropagation is not biologically plausible, that is, it cannot be the algorithm used by the brain. There are several reasons for this, chief among which is that errors do not propagate backwards in the brain.

A sparse backpropagating autoencoder is doubly implausible, because not only does it rely on backpropagation, but it also requires that we wait until all data points are presented before determining the average activation. It would be much nicer if we had something more biologically plausible, if only because I have the suspicion that any algorithm that is not biologically plausible cannot lead to human-level intelligence.

So in the next article, I'll talk about a biologically plausible algorithm called the Restricted Boltzmann Machine.

Machine Learning: K-Means Clustering

K-means clustering is the first unsupervised learning algorithm in this series. Unsupervised means that the answer is not available to the learning algorithm beforehand, just the cost of a potential solution. To me, unsupervised learning algorithms are more exciting than supervised learning algorithms because they seem to transcend human intelligence in a way. An unsupervised learning algorithm will seek out patterns in data without any (or with few) hints. This seems especially important when, as the human, we don't know what the hints could possibly be.

The Google "Visual Cortex" project shows how powerful unsupervised learning algorithms can be: from millions of unlabeled images, the algorithm found generalized categories such as human faces and cat faces. It is easy to see that if the same thing could be done with an audio stream or a text stream, the streams could be combined at a high enough level for association to produce sounds and text for images, images for text and sounds, and at a high enough level, reasoning.

The K-means clustering algorithm treats data as if it were in clusters centered around some number of points k, one cluster per point. Conceptually, the algorithm picks k centroid points, assigns each point in the data to a cluster according to which cluster's centroid is closest, moves each centroid to the center of its cluster, and repeats. The result is a set of centroids which minimizes the distances between each point and its associated cluster's centroid.

The cost function is:
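
That is, the squared distances from each point to its own cluster's centroid, summed over all clusters:

J = \sum_{i=1}^{k} \sum_{x \in C_i} \big\lVert x - \mu_i \big\rVert^2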

where Ci is cluster i, and μi is the centroid for cluster i.

There are a few methods for picking the initial centroids. One method, the Forgy method, involves picking k random points from the data set to be the initial centroids. Another method, the Random Partition method, assigns each data point to a random cluster, then produces the initial centroids for each cluster. Regardless of the initial method, the algorithm proceeds by repeating the following two steps:

First, produce the clusters by assigning each data point to one cluster. This means comparing the distance of a point to each centroid, and assigning the point to the cluster whose centroid yields the lowest distance.

Second, calculate the centroid of each resulting cluster.

Repeat these steps until the total cost does not change.
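
Here's a small sketch of one iteration of those two steps in Go. This is my own illustration, not the Java implementation described below; it assumes squared Euclidean distance and data already loaded as slices:

package main

import "fmt"

// squaredDistance returns the squared Euclidean distance between two points.
func squaredDistance(a, b []float64) float64 {
    var d float64
    for i := range a {
        diff := a[i] - b[i]
        d += diff * diff
    }
    return d
}

// kmeansStep performs one assignment pass and one centroid-update pass,
// returning the new centroids and the total cost for this assignment.
func kmeansStep(data, centroids [][]float64) ([][]float64, float64) {
    k, dim := len(centroids), len(centroids[0])
    sums := make([][]float64, k)
    counts := make([]int, k)
    for i := range sums {
        sums[i] = make([]float64, dim)
    }

    cost := 0.0
    for _, x := range data {
        // Assign x to the cluster whose centroid is closest.
        best, bestDist := 0, squaredDistance(x, centroids[0])
        for c := 1; c < k; c++ {
            if d := squaredDistance(x, centroids[c]); d < bestDist {
                best, bestDist = c, d
            }
        }
        cost += bestDist
        counts[best]++
        for i, v := range x {
            sums[best][i] += v
        }
    }

    // Move each centroid to the mean of its assigned points.
    newCentroids := make([][]float64, k)
    for c := range newCentroids {
        newCentroids[c] = make([]float64, dim)
        if counts[c] == 0 {
            copy(newCentroids[c], centroids[c]) // keep an empty cluster's centroid
            continue
        }
        for i := range sums[c] {
            newCentroids[c][i] = sums[c][i] / float64(counts[c])
        }
    }
    return newCentroids, cost
}

func main() {
    data := [][]float64{{0, 0}, {0, 1}, {10, 10}, {10, 11}}
    centroids := [][]float64{{0, 0}, {10, 10}} // Forgy-style: picked from the data
    for i := 0; i < 5; i++ {
        var cost float64
        centroids, cost = kmeansStep(data, centroids)
        fmt.Println(centroids, cost)
    }
}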

 

A Concrete Example

I implemented the above algorithm in Java and ran it on the usual concrete strength data set. As usual, I set aside 20% of the data set as a cross-validation. But a problem quickly became apparent: how many clusters should I use? Clearly the more clusters, the less the overall cost would be simply because there would be more centroids.

One solution is to try different numbers of centroids and ask if there is an obvious point where there is not a lot of improvement in the cost. Here is what I found from k=2 through 10:

[Figure: training cost (blue) and cross-validation cost (red) versus number of clusters, k=2 through 10]

Blue is the training cost, while red is the cross-validation cost. Interestingly, the cross-validation cost was always below the training cost, indicating that the clusters found from the training points represent the cross-validation points well. There is clearly no overfitting, because there is no large gap between the costs. However, there is no obvious point at which increasing the number of clusters stops helping much.

The other solution to choosing the number of clusters relies on evaluating different numbers of clusters only after later processing. If downstream processing works better with a certain number of clusters, then that number of clusters should be chosen. So, for example, if I put each cluster's data through a neural network, how low is the error for each number of clusters?

I trained an 8:10:10:1 neural network on each cluster of points, so k=2 had 2 networks, and k=10 had 10 networks. I used fewer hidden neurons than before, on the theory that each cluster has less data, meaning that I can probably get away with a smaller parameter space. Here are the results:

[Figure: neural network error versus number of clusters]

Clearly, the more clusters and the more networks, the better the output, perhaps because more networks mean smaller clusters, which in turn mean less variation to account for. Interestingly, 8 clusters works about as well as 5 clusters, and it's only with 9 and 10 clusters that more advantage is found. In any case, choosing k=10, here are the errors:

[Figure: errors on the training set with k=10 clusters]

[Figure: errors on the cross-validation set with k=10 clusters]

Compared to training a single 8:20:20:1 network on the entire data set, clustering has definitely reduced the errors. Most errors in the training set are now under 5% (down from 10% before), and even the one troubling point from before (error 100%) has been knocked down to an error of 83%. The low errors on the cross-validation points -- which, remember, the networks have never seen -- lead us to believe that the networks are not overfit.

I would still want to look at those high-error points, perhaps asking for the experimental data for those points to be rechecked or even rerun. But for now, I would be happy with this artificially intelligent concrete master.

For the next article, I'm going to go off the syllabus of the Machine Learning course, and talk about one of my favorite unsupervised learning algorithms, the autoencoder.

Machine Learning: Feedforward backpropagation neural networks

If we take a logistic function as in logistic regression, and feed the outputs of many logistic regressions into another logistic regression, and do this for several levels, we end up with a neural network architecture. This works nicely to increase the number of parameters as well as the number of features from the basic set you have, since a neural network's hidden layers act as new features.

[Figure: a feedforward neural network]

Each non-input neuron in a layer gets its inputs from every neuron from the previous layer, including a fixed bias neuron which acts as the x0 = 1 term we always have.

Rather than θ, we now call the parameters weights, and the outputs are now called activations. The equation for the output (activation) of neuron q in layer l is:

a(l)q = g(z(l)q), where z(l)q = Σp w(l-1)pq a(l-1)p

Breaking it down:

  • w(l-1)pq is the weight from neuron p in layer l-1 to neuron q in layer l
  • a(l-1)p is the activation of neuron p in layer l-1, and of course when p=0, the activation is by definition 1.
  • z(l)q is the usual weighted sum, specifically for neuron q in layer l.
  • g is some function, which we can take to be the logistic function.

So we see that the output of any given neuron is a logistic function of its inputs.

We will define the cost function for the entire output, for a single data point, to be as follows:

C = (1/2) Σq (a(L)q - yq)²     (L being the output layer)

Note that we are using the linear regression cost, because we will want the output to be an actual output rather than a classification. The cost can be defined using the logistic cost function if the output is a classification.

Now, the algorithm proceeds as follows:

  1. Compute all the activations for a single data point.
  2. For each output neuron q, compute:

     δ(L)q = (a(L)q - yq) g'(z(L)q)     (for the logistic function, g'(z) = g(z)(1 - g(z)))

  3. For each non-output neuron p, working backwards in layers from layer L-1 to layer 1, compute:

     δ(l)p = g'(z(l)p) Σq w(l)pq δ(l+1)q

  4. Compute the weight updates as follows:

     Δw(l)pq = -α δ(l+1)q a(l)p

 

The last step can, in fact, be delayed. Simply present multiple data points, or even the entire training set, adding up the changes to the weights, and then only update the weights afterwards.
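
Here is a minimal sketch of steps 1 through 4 in Java, with the batch-style accumulation just described. The array layout and method names are my own illustration, and g'(z) is expanded as a(1 - a) for the logistic activation.

// Minimal backpropagation sketch for a fully-connected network with
// logistic activations and the squared-error cost above. weights[l][p][q]
// is the weight from neuron p in layer l (index 0 is the bias, whose
// activation is fixed at 1) to neuron q in layer l+1.
public class BackpropSketch {
    static double g(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // Forward pass: returns the activation of every neuron in every layer.
    static double[][] forward(double[][][] weights, double[] input) {
        double[][] a = new double[weights.length + 1][];
        a[0] = input;
        for (int l = 0; l < weights.length; l++) {
            int outSize = weights[l][0].length;
            a[l + 1] = new double[outSize];
            for (int q = 0; q < outSize; q++) {
                double z = weights[l][0][q]; // bias neuron, activation 1
                for (int p = 0; p < a[l].length; p++) {
                    z += weights[l][p + 1][q] * a[l][p];
                }
                a[l + 1][q] = g(z);
            }
        }
        return a;
    }

    // Backward pass for a single data point: accumulates the weight changes
    // into grad (same shape as weights), so several points can be presented
    // before the weights are actually updated.
    static void accumulate(double[][][] weights, double[][][] grad,
                           double[] input, double[] target) {
        double[][] a = forward(weights, input);
        int L = a.length - 1;
        double[][] delta = new double[a.length][];
        // Output deltas: (a - y) * g'(z), with g'(z) = a * (1 - a).
        delta[L] = new double[a[L].length];
        for (int q = 0; q < a[L].length; q++) {
            delta[L][q] = (a[L][q] - target[q]) * a[L][q] * (1 - a[L][q]);
        }
        // Hidden deltas, working backwards through the layers.
        for (int l = L - 1; l >= 1; l--) {
            delta[l] = new double[a[l].length];
            for (int p = 0; p < a[l].length; p++) {
                double sum = 0;
                for (int q = 0; q < a[l + 1].length; q++) {
                    sum += weights[l][p + 1][q] * delta[l + 1][q];
                }
                delta[l][p] = a[l][p] * (1 - a[l][p]) * sum;
            }
        }
        // Weight changes: delta of the destination times activation of the source.
        for (int l = 0; l < weights.length; l++) {
            for (int q = 0; q < a[l + 1].length; q++) {
                grad[l][0][q] += delta[l + 1][q]; // bias activation is 1
                for (int p = 0; p < a[l].length; p++) {
                    grad[l][p + 1][q] += delta[l + 1][q] * a[l][p];
                }
            }
        }
    }
}

After presenting one or more points, each weight would be decreased by α times its accumulated entry in grad.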

Because it is extraordinarily easy to get the implementation wrong, I highly suggest the use of a neural network library such as the impressively expansive Encog as opposed to implementing it yourself. Also, many neural network libraries include training algorithms other than backpropagation.

 

The Concrete Example

I used Encog to train a neural network on the concrete data from the earlier post. I first took the log of the output, since that seemed to represent the data better and led to less network error. Then I normalized the data, except I used the range 0-1 for both the days input and the strength output, since that seemed to make sense, and also led to less network error.

Here's the Java code I used. Compile it with the Encog core library in the classpath. The only argument to it is the path to the Concrete_Data.csv file.

The network I chose, after some experimentation checking for under- and overfitting, was an 8:20:10:1 network. I used this network to train against different sizes of training sets to see the learning curves. Each set of data was presented to the network for 10,000 iterations of an algorithm called Resilient Backpropagation, which has various advantages over backpropagation, namely that the learning rate generally doesn't have to be set.
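
For reference, setting up and training such a network with Encog looks roughly like the sketch below. The class names are from my recollection of the Encog 3 API and may differ in your version, so treat this as an outline rather than the code I actually used.

import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

// Rough outline of an 8:20:10:1 network trained with Resilient Propagation.
public class ConcreteNetworkSketch {
    public static BasicNetwork train(double[][] inputs, double[][] outputs, int iterations) {
        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, 8));                     // input layer plus bias
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 20)); // first hidden layer
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 10)); // second hidden layer
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1)); // output, since strength is normalized to 0-1
        network.getStructure().finalizeStructure();
        network.reset(); // random initial weights

        MLDataSet trainingSet = new BasicMLDataSet(inputs, outputs);
        ResilientPropagation train = new ResilientPropagation(network, trainingSet);
        for (int i = 0; i < iterations; i++) {
            train.iteration();
        }
        train.finishTraining();
        return network;
    }
}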

[Figure: learning curves; training cost in blue, cross-validation cost in red]

As before, the blue line is the training cost, the mean squared error against the training set, and the red line is the cross-validation cost, the mean squared error against the cross-validation set. This is generally what I would expect for an algorithm that is neither underfitting nor overfitting. Overfitting would show a large gap between training and cross-validation, while underfitting would show high errors for both. 

If we saw underfitting, then we would have to increase the parameter space, which would mean increasing the number of neurons in the hidden layers. If we saw overfitting, then decreasing the parameter space would be appropriate, so decreasing the number of neurons in the hidden layers would help.

Since the range of the output is 0-1, over the entire training set we get an MSE (training) of 0.0003, which means the average error per data point is 0.017. This doesn't quite tell the whole story, because if an output is supposed to be, say, 0.01, an error of 0.017 means the output wasn't very well fit. Instead, let's just look at the entire data set, ordered by value, after denormalization:

[Figure: errors over the training set, ordered by value]

[Figure: errors over the cross-validation set, ordered by value]

The majority of errors fall under 10%, which is probably good enough. If I were concerned with the data points whose error was above 10%, I might be tempted to treat those data points as "difficult", try to train a classifier to label data points as "difficult" or "not difficult", and then train different regression networks on each class.

The problem with that is that I could end up overfitting my data again, this time manually. If I manually divide my points into "difficult" and "not difficult" points, then what is the difference between that and having more than two classes? How about as many classes as there are data points?

What would be nice is if I could have an automatic way to determine if there is more than one cluster in my data set. One clustering algorithm will be the subject of the next post.

Machine Learning: Linear Regression Example: Concrete

There is a fun archive of machine learning data sets maintained by UC Irvine. For a concrete example, let's take the Concrete Compressive Strength data set and try linear regression on it. (Get it? Concrete? Ha ha ha!) There are 1030 points in the data set, eight input features, and one output feature. Here is the basic info:

Feature #   Name                                     Range
1           Amount of Cement (kg/m3)                 102 - 540
2           Amount of Blast Furnace Slag (kg/m3)     0 - 359.4
3           Amount of Fly Ash (kg/m3)                0 - 200.1
4           Amount of Water (kg/m3)                  121.75 - 247
5           Amount of Superplasticizer (kg/m3)       0 - 32.2
6           Amount of Coarse Aggregate (kg/m3)       801 - 1145
7           Amount of Fine Aggregate (kg/m3)         594 - 992.6
8           Mixture Age (days)                       1 - 365
Output      Compressive Strength (MPa)               2.3 - 82.6

In keeping with the principle that the ranges of the features should be scaled to the range (-1, 1), we will subtract the midpoint of each range from each feature and divide by the new maximum. So, for example, the midpoint of feature 1 is 321, so subtracting brings the range to (-219, 219), and dividing by 219 brings the range to (-1, 1).
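
As a sketch, scaling one feature column this way might look like the following (the method name is mine):

// Scale one feature column to roughly (-1, 1): subtract the midpoint of its
// observed range, then divide by half the range, as in the cement example
// (midpoint 321, half-range 219).
static void scaleToUnitRange(double[][] data, int column) {
    double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
    for (double[] row : data) {
        min = Math.min(min, row[column]);
        max = Math.max(max, row[column]);
    }
    double midpoint = (min + max) / 2.0;
    double halfRange = (max - min) / 2.0;
    for (double[] row : data) {
        row[column] = (row[column] - midpoint) / halfRange;
    }
}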

Here's a plot of feature 1 versus the output. There's a lot of variation, but it does sort of look roughly correlated.

[Figure: amount of cement versus compressive strength]

Here's the Octave code: concrete_regression.m. You'll also need to open Concrete_Data.xls and export it to CSV as Concrete_Data.csv so that Octave can read the file. Place both the Octave and CSV files in the same directory, change to that directory, run Octave, and then call concrete_regression().

Here is the learning curve and the parameters found. The training cost is in blue, while the cross-validation cost is in red.

[Figure: learning curve; training cost in blue, cross-validation cost in red]

The training and cross-validation costs are very close to each other, which is good. It means that the learned parameters are quite representative of the entire data set, so there is no overfitting.

However, the cost appears to be quite high: about 0.037. This means that the output is, on average, off by 0.27. Which is, by nearly any standard, terrible. Clearly there is some intense underfitting going on, and the only remedy is to get more features and more parameters.

But we could combine the features in infinitely many ways. How are we going to find good combinations? We'll take a look at neural networks in the next article; they essentially combine the features for us, and give us more parameters as well.

Machine Learning: Linear and Logistic Regression Unified

Warning: very twisty math ahead. Feel free to skip.

Linear and logistic regression use different cost functions:

J(θ) = (1/2m) Σi=1..m (hθ(x(i)) - y(i))²     (linear regression)

J(θ) = -(1/m) Σi=1..m [ y(i) log hθ(x(i)) + (1 - y(i)) log(1 - hθ(x(i))) ]     (logistic regression)

The real question is, why are the cost functions so different? It turns out that we can derive the cost functions from the same principles.

We start by claiming that whatever function (that is, model of reality) we choose, the outputs in the data set are based on that function of the inputs, plus some randomness. Nearly everything in reality is probabilistic to a greater or lesser extent. This is the whole basis of the field of statistics.

So let's write out the relationship between y, the actual output, and h, the estimated output:

y(i) = hθ(x(i)) + ε(i)

In the above equation, ε(i) represents the error between y(i) and h(i). ε is, therefore, a random variable. Remember that we are assuming a reality that is probabilistic, which means that given the inputs x(i) reality will generate y(i) with some probability. The goal of the model is to get rid of as much of the random variation as possible, leaving us only with some small error. The claim is that this error always has mean 0 and is Gaussian. This simply means that our model's output is centered smack in the middle of the probability distribution for reality's output, and that a real output farther away from the mean is less likely.

Now the probability distribution of ε(i) given a particular data point x(i) and a particular set of parameters θ (because, after all, it is our choice of data point and parameters that leads to the error) is, as we said, Gaussian with mean 0, and some standard deviation σ. We're going to assume that the standard deviation in the output is the same no matter where in the input space we are. The technical term for this is homoscedasticity (homo-, meaning the same, and Greek skedasis, meaning a dispersal) so now you can bring that up at parties.

We write the probability density:

p(ε(i) | x(i); θ) = (1 / (σ√(2π))) exp(-(ε(i))² / (2σ²))

Now, note that y is simply ε plus a function of x, which is just another way of saying that the output of reality is probabilistic, but specifically, given our model, it must be Gaussian:

p(y(i) | x(i); θ) = (1 / (σ√(2π))) exp(-(y(i) - hθ(x(i)))² / (2σ²))

Now, let's find the probability for the entire data set. This is called the likelihood, and of course it still depends on our choice of parameters. This is just the probabilities of each of the data points, multiplied together:

L(θ) = Πi=1..m p(y(i) | x(i); θ)

And now here is the key: this is precisely what we need to maximize. We want to maximize the probability that we get our data set output given the inputs and our parameters. Or, we want to get, as it is known, maximum likelihood.

Now, since the logarithm maintains the property that it is monotonically increasing, that is, log(x) > log(y) if x > y, we can take the log of the probability and maximize that. This just makes the later math easier:

log L(θ) = m log(1 / (σ√(2π))) - (1/(2σ²)) Σi=1..m (y(i) - hθ(x(i)))²

Since this is maximization, we can feel free to get rid of any additive and positive multiplicative constants:

arg maxθ log L(θ) = arg maxθ [ -(1/2) Σi=1..m (y(i) - hθ(x(i)))² ] = arg minθ (1/2) Σi=1..m (y(i) - hθ(x(i)))²

arg maxθ just means the value of θ which maximizes. In the last step, I've simply turned the maximization problem into a minimization problem by reversing the sign. And so we see that this is exactly, except for a constant, the cost function for linear regression.

For logistic regression, our interpretation of h was that it is the probability that the data point is in the class. Again, we're assuming that the actual class is probabilistic, but here our model directly tells us the probability of the output being what it was in the data set. Because of that interpretation, we don't need to mess around with Gaussian errors, and we can go directly to the probability distribution:

p(y(i) = 1 | x(i); θ) = hθ(x(i))
p(y(i) = 0 | x(i); θ) = 1 - hθ(x(i))

And now, the probability that we get our data set, which is just the probability that each of our outputs classifies its data point correctly, is (using one particular formulation out of many possible that is nice when we take logs later):

L(θ) = Πi=1..m hθ(x(i))^y(i) (1 - hθ(x(i)))^(1 - y(i))

Taking logs and maximizing/minimizing:

log L(θ) = Σi=1..m [ y(i) log hθ(x(i)) + (1 - y(i)) log(1 - hθ(x(i))) ]

arg maxθ log L(θ) = arg minθ -Σi=1..m [ y(i) log hθ(x(i)) + (1 - y(i)) log(1 - hθ(x(i))) ]

And this is exactly, except for a constant, the cost function for logistic regression.

So in summary, what we've done is try to get the probability that we get the data set's outputs given the data set's inputs and a choice of parameters. This is what we want to maximize.

Finding this probability in turn depends on finding the individual probabilities for each data point. In logistic regression, this is a direct consequence of the definition of h, but for linear regression it is based in the assumption that the output, once we subtract out our model, is Gaussian distributed with mean 0.

Once we have the overall probability, we seek to maximize it, and taking logs and changing sign to turn it into a minimization gets us our cost function.

Machine Learning: Logistic Regression

Logistic regression is like linear regression, except the output is run through a squashing function called the logistic function. The logistic function rescales the output to the interval (0,1). Because of this property, logistic regression is useful for classification problems, where the data set's output feature is 0 if the point is not in the class, or 1 if the point is in the class. The interpretation of the hypothesis function then is that it is the probability that the point is in the class.

[Figure: the logistic function]

 

hθ(x) = g(θ0x0 + θ1x1 + ... + θNxN)

g(z) = 1 / (1 + e^(-z))

 

Just as in linear regression, we define a cost function with an optional regularization term. The cost function for logistic regression is different from that of linear regression, but it still maintains the property that 0 is perfect, and higher is worse:

J(θ) = -(1/m) Σi=1..m [ y(i) log hθ(x(i)) + (1 - y(i)) log(1 - hθ(x(i))) ] + (λ/2m) Σj=1..N θj²

Interestingly, with the cost defined like this, the gradient is the same as in linear regression:

∂J/∂θj = (1/m) Σi=1..m (hθ(x(i)) - y(i)) xj(i)

Using gradient descent as usual (or, in fact, any minimization algorithm) gives us a solution which is always a global optimum.

When looking at costs from a learning curve perspective, use the logistic regression cost without the regularization term.

If there is more than one class, that is, the problem is one of multiclass classification, then we can simply train one set of parameters per class. Then, when we evaluate a data point, we feed its features into all classifiers, which gives us the probabilities that the point is in each class. We can then simply choose the class with the highest probability.
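
A sketch of that evaluation step, assuming one trained parameter vector per class and the logistic hypothesis above (the method name is mine):

// One-vs-all prediction sketch: thetas[c] holds the parameters trained for
// class c, x includes the faked x0 = 1 feature, and we return the class
// whose classifier reports the highest probability.
static int classify(double[][] thetas, double[] x) {
    int bestClass = 0;
    double bestProbability = -1;
    for (int c = 0; c < thetas.length; c++) {
        double z = 0;
        for (int j = 0; j < x.length; j++) {
            z += thetas[c][j] * x[j];
        }
        double h = 1.0 / (1.0 + Math.exp(-z)); // the logistic function
        if (h > bestProbability) {
            bestProbability = h;
            bestClass = c;
        }
    }
    return bestClass;
}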

The next post will unify the linear and logistic regression cost functions so that we can see they fall out from the same considerations.

Machine Learning: Overfitting, underfitting

It's not enough for a machine learning algorithm to optimize its cost on your data set. If your algorithm works well with points in your data set, but not on new points, then the algorithm overfit the data set. And if your algorithm works poorly even with points in your data set, then the algorithm underfit the data set.

Underfitting is easy to check as long as you know what the cost function measures. The definition of the cost function in linear regression is half the mean squared error. That is, if the mean error for each point is z, then the cost will be 0.5z². So if your output ranges from, say, 100 to 1000, and your cost is 1, then the mean error would be 1.4, which represents a mean error of anywhere from 1.4% to 0.14%, and that may be good enough. If, however, your cost is 50, then the mean error would be 10, which is anywhere from 10% to 1%, which is probably bad.

If your cost ends up high even after many iterations, then chances are you have an underfitting problem. Or maybe your learning algorithm is just not good for the problem.

Underfitting is also known as high bias, since it means your algorithm has such a strong bias towards its hypothesis that it does not fit the data well. It also means that the hypothesis space the learning algorithm explores is too small to properly represent the data.

Checking for overfitting is also fairly easy. Split the data set so that 80% of it is your training set and 20% is a cross-validation set. Train on the training set, then measure the cost on the cross-validation set. If the cross-validation cost is much higher than the training cost, then chances are you have an overfitting problem.

Overfitting is also known as high variance, since it means that the hypothesis space the learning algorithm explores is too large to properly constrain the possible hypotheses.

 

Dealing with overfitting

  • Throw features away. The hypothesis space is too large, and perhaps some features are faking the learning algorithm out. Throwing features away shrinks the hypothesis space.
  • Add regularization if there are many features. Regularization forces the magnitudes of the parameters to be smaller, thus shrinking the hypothesis space. It works like this:
First, add a new term to the cost function which penalizes the magnitudes of the parameters (except for θ0, which corresponds to the faked x0 feature):
 
J(θ) = (1/2m) Σi=1..m (hθ(x(i)) - y(i))² + (λ/2m) Σj=1..N θj²
 
Again, note that the summation over the squared parameters starts at 1, not 0. λ is a parameter which adjusts the penalization, which means the size of the hypothesis space. Small values increase the hypothesis space, while larger values shrink the hypothesis space. Of course, too large a value may lead to too small a hypothesis space, which leads to underfitting.
 
We can start with λ=1 and then increase or decrease logarithmically, measuring the training and cross-validation cost each time. However, when measuring, use the definition of the training cost without regularization. This is because you just want to see the mean squared error, and the cost contributed by the parameters isn't an error.
 
Now, we need the gradients with respect to each parameter. This is simply:
 
∂J/∂θj = (1/m) Σi=1..m (hθ(x(i)) - y(i)) xj(i) + (λ/m) θj     (for j ≥ 1; omit the λ term for θ0)
 
 
And now the learning algorithm uses this gradient instead (a sketch of this computation follows below).
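
Here is a minimal sketch of the regularized gradient computation for linear regression. The method name and data layout (with x0 faked to 1) are my own illustration.

// Regularized gradient sketch for linear regression: the usual gradient plus
// (lambda / m) * theta_j for every parameter except theta_0.
static double[] regularizedGradient(double[][] x, double[] y, double[] theta, double lambda) {
    int m = x.length;
    double[] grad = new double[theta.length];
    for (int i = 0; i < m; i++) {
        double h = 0;
        for (int j = 0; j < theta.length; j++) h += theta[j] * x[i][j]; // x[i][0] is the faked 1
        double error = h - y[i];
        for (int j = 0; j < theta.length; j++) grad[j] += error * x[i][j] / m;
    }
    for (int j = 1; j < theta.length; j++) grad[j] += lambda * theta[j] / m; // skip theta_0
    return grad;
}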
 

Dealing with underfitting

  • More data will not generally help. It will, in fact, likely increase the training error.
  • However, more features can help, because that expands the hypothesis space. This includes making new features from existing features.
  • More parameters can also help expand the hypothesis space. For linear regression, the number of parameters equals the number of features, but for other learning algorithms, the number of parameters can be greater.

 

Training curves and metaparameter selection

What values of the metaparameters (regularization penalty λ, number of features, number of parameters, number of data points) are good? Perform various runs where you fix all metaparameters except one, plot the training and cross-validation costs versus the metaparameter you are tuning, and choose the value that minimizes the cross-validation cost.

However, when reporting the error rate or cost metric for your chosen metaparameters, you will have to set aside some portion of the data which was touched neither by the learning algorithm (during training) nor by you (during metaparameter selection). Thinking about it differently, the learning algorithm performs gradient descent based on the values of the metaparameters, and then you perform a kind of gradient descent on the metaparameters themselves. That means your metaparameter search needs its own held-out set, just as the learning algorithm needed a cross-validation set!

So we end up with three subsets of data, a training set for the learning algorithm, a cross-validation set for metaparameter selection, and a test set for final results. Generally this can be split 80%, 10%, 10%, or 60%, 20%, 20% if there is enough data.

Machine Learning: Linear Regression

I took Stanford's online Machine Learning class taught by Andrew Ng, director of SAIL, and one of the coauthors of the paper resulting from the recent Google "Cortex" project. While I'm fairly up-to-date with evolutionary computing, I felt that I could use a refresher in non-evolutionary techniques, and I'm really glad I did! Andrew's lectures were well-organized and highly intelligible, and the areas where I felt he went too slowly can be forgiven since his audience is undergrads. The videos can be played at varying speeds, and I alternated between 125% and 150%.

I'll post a series of articles on each subject that was covered, more for my own reference, but perhaps readers might find these useful as well.

Linear Regression

Suppose we did a study where each record (or point) in the data set consists of measurements for a number of different features. For example, the features in a medical study could be the numerical results of blood tests, and the features in a botanical study could be the numerical results of the sizes of various plant parts. The features in an economics study could be economic indices. In other words, each record is multivariate.

What we want to do is call one of the features the output, and call the rest of the features the inputs, and find out if there is a way to predict the output given the inputs. For example, is there a way to predict cholesterol level given blood sugar level, vitamin D levels, and white blood cell count? Specifically, if we call the inputs x1, x2, ..., xN, and the output y, then we want to find some function h ("h" stands for hypothesis) such that y ≈ h(x1, x2, ..., xN).

With linear regression, we further state that h is a linear combination of the inputs:

hθ(x) = θ0 + θ1x1 + θ2x2 + ... + θNxN

The thetas are the unknown coefficients of each input — that is, they are the parameters of the function — and the goal of linear regression is to find the thetas which, considered over the data set, make hθ ≈ y. We determine how well the function, given a particular set of parameters, fits the data set by using a cost function. The higher the cost, the worse the fit. Ideally, we would like to see a zero cost, meaning a perfect fit, and there's no such thing as a negative cost.

J(θ) = (1/2m) Σi=1..m (hθ(x(i)) - y(i))²

m is the number of points in the data set, x(i) is the set of inputs for the ith data point, and y(i) is the output for the ith data point, so i goes from 1 to m. All we're doing here is taking the difference between the predicted output and the actual output, squaring it (so that it doesn't matter what direction the error is in, and zero means a perfect match), and then taking the average over all points in the data set. There's also a division by two, which makes the later step a little cleaner.

Scaling the cost by a constant, such as 1/2, doesn't matter, because we're still maintaining the property that higher costs mean worse fits, and a zero cost is a perfect fit.

What we want to do is find a set of parameters θ so that J(θ) is minimized. We will probably not be able to get the cost down to zero, but we will get as close as possible.

While it is possible to calculate the parameters directly from the data set under most circumstances using an equation known as the normal equation, this isn't nearly as interesting as searching for the solution, and when the hypothesis function gets any more complicated than a linear function, there is in general no direct solution, and the parameters must be searched for.

The way we will search for the solution is to randomly choose the parameters, see what the cost is, and then move the parameters in a downhill direction. This is called gradient descent. The first thing we do is compute the gradient of the cost function with respect to each parameter. Actually, the thing we do before the first thing is to fake up an input variable which is always 1:

hθ(x) = θ0x0 + θ1x1 + ... + θNxN = Σj=0..N θjxj, where x0 = 1

Here, we faked up an input feature x0 which is always 1. This makes the math more regular and work out more cleanly. Now we can take the gradient:

∂J/∂θj = (1/m) Σi=1..m (hθ(x(i)) - y(i)) xj(i)

To change the parameters so that the cost moves downwards, we choose some rate α, multiply each gradient by that, and move each parameter in the downhill direction:

θj := θj - α ∂J/∂θj     (updating every θj simultaneously)

We can then calculate the new gradient using the new parameters, move again, and repeat until we're not reducing the cost any more.
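
Putting the pieces together, the whole loop might look like this sketch, using a fixed number of iterations rather than checking for convergence (the method name is mine):

// Gradient descent sketch for linear regression: each row of x includes the
// faked x0 = 1 feature, and alpha is the learning rate.
static double[] gradientDescent(double[][] x, double[] y, double alpha, int iterations) {
    int m = x.length;
    int n = x[0].length;
    double[] theta = new double[n]; // starting from all zeros is fine here
    for (int iter = 0; iter < iterations; iter++) {
        double[] grad = new double[n];
        for (int i = 0; i < m; i++) {
            double h = 0;
            for (int j = 0; j < n; j++) h += theta[j] * x[i][j];
            double error = h - y[i];
            for (int j = 0; j < n; j++) grad[j] += error * x[i][j] / m;
        }
        for (int j = 0; j < n; j++) theta[j] -= alpha * grad[j]; // move downhill
    }
    return theta;
}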

Practical considerations

  • Because of the simple nature of the cost function, there are no local optima. There is only one global optimum.
  • Each feature should be scaled so that it is approximately in the range -1 to +1. It doesn't matter so much if the feature is only positive or only negative, or even if the feature goes a little bit outside this range (although -3 to +3 is a strict limit). This is so that the gradient doesn't move one parameter by a lot but the rest of the parameters by only a little just because the range of that parameter is large relative to the others.
  • You want to choose α to be as large as possible without overshooting and kicking the parameters violently uphill. Monitor the cost function, and if it increases, your rate is too high. In general, start with a low rate such as 0.03, and logarithmically increase the rate (i.e. 0.03, 0.1, 0.3, 1, and so on) if it seems that the cost is going down too slowly, or decrease the rate (i.e. 0.03, 0.01, 0.003, and so on) if the cost increases.
  • Randomly set the theta parameters to be in the range -1 to +1. Or, they can even be all zero. But it's good to get in the habit of going with a random starting point.
  • You can create more features by combining existing features by, for example, multiplication, division or squaring. Examine the data set manually to see if you can spot anything which appears nonlinear, and attempt to linearize the nonlinearity. For example, if one feature seems to be following a square law, then square root the feature before running linear regression.
  • Underfitting and overfitting will be handled in a later article.

The Cloud sucks as a filing cabinet

Help, I'm drowning in paper! My filing cabinets are overflowing! I need to store these in The Cloud, in one place, so that I don't have to go to a dozen different websites to find my documents, securely, so that I can't have my identity stolen, and organized so that I can quickly find the documents I'm looking for.

Great, there's Amazon's S3, Google's Cloud Storage, Apple's iCloud, DropBox, SpiderOak, EverNote or any of a number of other players. I can certainly upload documents to, and download documents from, these services. But they're all a bit... off. Here's what I mean.

Folders

Ideally, a filing cabinet has some system of organization which lets you find files fast. Physically, each document goes into a single drawer labelled X, and a folder in that drawer labelled Y. That's pretty much all you get. Digitally, you should at least be able to assign tags to a file, such as "pets", "bank", "2012", "insurance" and then search on multiple tags. Digital files also let you search inside, in case the tags are nonexistent or not descriptive enough.

Which of the above-mentioned services allow tags? EverNote. That's it. The rest give you a filing cabinet and you sort of ball up your papers and toss them inside. You find a document by uncrumpling each document and looking at it to see if it's the one you want.

Security

Lots of companies claim that they store your files securely. DropBox has proven that you can't trust any of these guys. The only real solution is to encrypt your files on your own computer, and decrypt them on your own computer. The cloud should not store your key.

Which of the above-mentioned services really have that level of security? SpiderOak. That's it. All the others might say that they have security, but all the maintenance guys have copies of the key to your filing cabinet. If a company maintains the keys for you, they can decrypt your files. Security breaches happen all the time, and for mainly stupid, monetary reasons.

No Intersection

None of these apps combine tagging with security. Why? There's clearly a demand for online storage of documents, but why isn't there any demand for secure, organizable online storage?

Risk Analysis

I think most people have done a back-of-the-envelope risk analysis and have concluded that they don't care if their files are stolen. Sure, they'll scream if someone intercepts their mail and looks at their credit card statements, but it's much easier for an employee of a company to unethically snoop through users' files, and it's much easier for companies to neglect security because it costs too much to implement, and nobody will know anyway. Or for companies to leave the back door open because the government makes them do it. Maybe a company that doesn't store customers' keys can't deal with telling customers that their files are unrecoverable if they lose their key?

However, where does that leave those of us at the high end of the security bell-curve?

Someone needs to step up to the challenge and develop a user-friendly solution that combines tagging of files with user-controlled security so that we can have a truly secure online filing cabinet.

The N-gram Sherlock Holmes

Can I give @Horse_ebooks a run for his money? I wrote a quick program to take in much of the Sherlock Holmes canon and create letter-based N-grams from them. Here's an example of the output with N=10, which seems to strike the right balance between unintelligible and over-fidelity. So I present to you a new Sherlock Holmes short story. I only added the title, some close-quotes and paragraph breaks, and the final "THE END".
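
The idea is simple enough to sketch: map every window of N-1 characters to the characters that follow it in the source text, then generate by repeatedly sampling a follower for the current window. The sketch below is illustrative, not the actual program I used.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Letter-based N-gram sketch: record, for every window of N-1 characters,
// which characters follow it in the source, then generate new text by
// repeatedly sampling a follower for the current window.
public class NGramSketch {
    public static String generate(String source, int n, int length, Random rng) {
        Map<String, List<Character>> followers = new HashMap<>();
        for (int i = 0; i + n <= source.length(); i++) {
            String key = source.substring(i, i + n - 1);
            followers.computeIfAbsent(key, k -> new ArrayList<>())
                     .add(source.charAt(i + n - 1));
        }
        StringBuilder out = new StringBuilder(source.substring(0, n - 1));
        for (int i = 0; i < length; i++) {
            String key = out.substring(out.length() - (n - 1));
            List<Character> next = followers.get(key);
            if (next == null) break; // dead end: this window never occurred in the source
            out.append(next.get(rng.nextInt(next.size())));
        }
        return out.toString();
    }
}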

With a Cold Sneer upon the Table

  "Crowder, the game-keeper in the temporary office, and Holmes had high hopes, while her pursuer dogged her some little use," he remarked, "if it is indeed, a stranger, that nobody can find this witness for you, perhaps, but at last he broke out at me, spitting and cursing, with a cold sneer upon the table."

 

The blundering against it, he put all his weight upon it.

 

Suspicion of treachery never for an instant but a glance of a face in a window, but in spite of my ignorance of the room. Very good, Brother Morris, we'll have the truth. "No doubt it's just what his employer of the woman Douglas. I saw no necessity to disturb the housekeeper's room."

 

"It looks like one of the mangled body, overwhelmed with grief and despair in his eyes were wild and staring!" An agitated water. He seemed himself during my checkered career, but never anyone in the room was laid down his room. Godfrey read it, and feeling once more. I am not sure that the game in their places.

 

My lens discloses more than the last. "I should wish to know."

 

"Well, was there yesterday that I spoke to her of the beautiful of women, and then to my amazement at the last occasion. He had had nothing save what I have it in me to make anything else?"

 

"'You have carte blanche."

 

"Absolutely?"

 

"I tell you, however, another mouthful of dinner before they were bosom friend of yours, and I don't know the day that I cannot help you much. His only accomplishments, sir, may be less than five feet high and impenetrable to light, she stood blinking out that he had got a trap to take her on her own terms, and I have some strong influence on her destiny and that on your arrival had left us," but he said nothing but a curse yet upon the brass salver.

 

"A young lady has arrived at the tops with rich brown fur, completed his arrangements; but at the trial. Good-night!"

 

"Good-night, Councillor McGinty, you may not be. But I know nothing, so that I can't put two words together with an apology," he said.

 

THE END

Logical Engine function casing

This is, I think, the final design for the function casing for the Logical Engine. I took 1/2" acrylic sheet and fed it to the ShopBot, making a series of sections, each of which can be lifted entirely up and out of the casing if it needs to be repaired. An additional guide, which screws onto the outside, further aligns the rods.

This idea was sparked by a member of the DIY Book Scanner forum, Charles Morrill.

[Photo: the function casing]

 

 

[Photo: closeup of the alignment system]

The horizontal slots were easy to make with the ShopBot, but those slots would allow the rods to shift horizontally. The outside guides simultaneously restrict their horizontal movement and precisely vertically position each horizontal segment to be one inch apart (plus or minus a few mils).

[Photo: the rods in the casing]

The rods still need to be quite straight. I found that the steel has a simple bow to it that can be corrected by applying a force to bow the steel in the other direction. The rods don't appear to be randomly twisted or bent, so hopefully I won't have to go the route of destressing with heat.