Or skip to the results

Cody Steele presented a B division Research and Development report at NARAM 46 that caught my attention. When I saw the title "All-Fire/No-Fire Tests on Estes Igniters Using the Neyer D-Optimal Method" I knew that I had to attend the oral presentation. The reason is that I had previously run across Dr. Neyer's work, and in particular his paper on the D-Optimal method. There was also a better than average chance that I was the only one in the audience who had ever read this paper.

The oral presentation was a disappointment. I know that it was a B division report and that expectations should be lowered but it was still not very impressive. There were several problems.

The first problem was that Cody demonstrated a profound misunderstanding of how current limited power supplies work. He tested the igniters at 6V, 9V, and 12V and even went so far as to use different launch controllers for each voltage.

The fallacy here is that the voltage only matters if the resistance in the circuit is large enough that the power supply reaches that voltage at its output before hitting the current limit. Since the resistance of the igniter was about 1 ohm and the wiring added not much more, at the test currents of less than 2 amps the power supply always went into current limit before reaching even 6 volts of output. So it was unsurprising that the results for each voltage were similar.

The next and biggest problem was that the stated test protocol was nowhere close to the protocol of the D-Optimal method. Cody's stated method was to apply a current, then increase it and apply the higher current, repeating until the igniter ignited. Compare this to the flow chart in Figure 2 of Dr. Neyer's paper.

Another problem was that the computation of the sample mean and variance is not a simple thing. At the time of the oral presentation it had been a while since I had read the paper, so I had forgotten the details, but I knew that it was not straightforward. So during the Q&A portion of the presentation I asked Cody if he had purchased, or received as a donation, software from Dr. Neyer for conducting the test. The answer was "no". Later I asked Cody how he performed the calculation and he said that he used Microsoft Excel. I was unaware that Excel performed such an esoteric statistical calculation.

Finally, he used the same pulse time for all tests. If he really was using the D-Optimal method then he could have calculated the all-fire and no-fire currents from one set of tests. Instead he had separate data series for all-fire and no-fire. Worse, those data series are quite different where they should be the same.

So the idea of repeating the tests to confirm the results has been bouncing around for a while.

NARTS has provided a CD with all of the NARAM 46 reports on it and the written report confirms the problems of the oral presentation and includes a few more.

Cody cites one paper by Dr. Neyer and then proceeds to quote, without attribution, the only paragraph in the report describing the Neyer test. Aside from the plagiarism problem, there is the fact that this paragraph in no way describes how the test is performed or how the data is analyzed.

The report fails to mention various assumptions required to perform the test: things like the probability distribution function used, the starting bounds for the mean, and the initial guess for the standard deviation. If Cody did in fact use the Neyer D-Optimal method, it is impossible to tell from the report. It is also impossible to repeat his results. I refuse to believe that anyone who could read and understand Dr. Neyer's paper well enough to conduct the test could then produce so poor a report that it looked as though the D-Optimal method had not been used.

Preparing to repeat these tests breaks down into two areas: hardware and software.

**Test Hardware**

For the hardware I need a constant current source that I can adjust to the various stimulus levels required. In addition it would be very handy if it could provide a time limited pulse.

Since I didn't have a power supply with an adjustable current limit, I needed to buy or build something. Buying something seemed too expensive so I built it.

One of the big decisions is just how long to make the current pulse. Cody used a stated time of one second, and I suspect this was measured by hand. Not very repeatable. In addition, in most cases, if you are launching a rocket with a single motor you aren't too worried about how long the igniter takes to function, just so long as it does. A one second duration is fine for that. But if you want to launch a cluster, then if the igniter functions after even half a second the rocket is likely to be gone. So a shorter time is required.

After banging around some simulations in Rocksim I settled on a pulse width of 100 ms. For the simulations I created a rocket with a mass exactly equal to one motor and then determined how long it took it to travel 6" when loaded with the same motor type.

I also dug out an old DATAQ DI-194 data acquisition board. This will only do 240 samples a second but it will still provide a decent record of the current applied to the igniter. I hacked up some sample code provided by DATAQ to capture data from the tests and save it to disk.

Figuring out how the D-Optimal test works and how to do it myself has been a lot harder than building the current source.

The first step was figuring out how to compute the "Maximum Likelihood Estimate" or MLE. Because of the nature of the test, you can't simply average the currents of the igniters that fired.

The Likelihood is a function of the mean and variance (or its square root, the standard deviation, if you prefer) and the probability distribution function of the variable. I suppose that there might be a few cases where the MLE has a closed form solution but in this case iteration is required. After a quick look at Numerical Recipes I decided to try out the GNU Scientific Library which has a variety of minimizers.

Equation 1 of Dr. Neyer's report is:

L(m, s) = Π_i [ C(n_i, n_i·p_i) · P(z_i)^(n_i·p_i) · Q(z_i)^(n_i·q_i) ]

This is a nasty looking thing. A rough translation into something resembling English is: The Likelihood is a function of the mean and standard deviation and is given by the product over the samples of the combination of the number of trials (n) taken the number of successes (n * p) at a time, times the probability of success at this stimulus level ( P(z) ) raised to the power of the number of successes (n*p), times the probability of a failure ( Q(z) ) raised to the power of the number of failures (n * q).

Like I said, nasty. But as it turns out, it gets a lot simpler. In the course of running the tests, all of the stimulus levels will almost certainly be unique. This results in the value of 'n' always being 1. Also there will be either a single success or a single failure. The result is that the combination will be either 1 thing taken 1 at a time (which is 1) or 1 thing taken 0 at a time (which is also 1), so that drops out of the computation. The probabilities will be raised to powers that are either 1 or 0, which makes that very easy as well.

One other complication, or perhaps simplification, that comes up is that we could be taking the product of lots of small numbers. For numerical reasons related to the finite precision of computer arithmetic, this is not a good thing. The normal workaround to this is to compute the log of the likelihood function. This has the effect of changing the product operator into a summation.

Now it is simple. For each sample, if it was a success, calculate the probability of success at that level and add its log to the total. If the sample is a failure, add the log of the probability of a failure at that stimulus level to the total.

That simplified nicely. Now there is the question of just what probability distribution function I should use.

Each individual igniter has some minimum current required to activate it. Because it is not possible to make every igniter absolutely identical, there is some variation in this level, which is why you need to perform fancy tests. To make things worse, you can't just increase the applied current until the igniter functions, because currents lower than the one required to make it function will almost certainly alter its properties, including the current required to make it function. So after each test you throw the test item away no matter the outcome.

Fear not. While the igniter is no longer useful for purposes of the test, it is still perfectly useful for igniting rocket motors, even one that ignited during testing, if it is an Estes igniter. It turns out that at the current levels being used the bridge wire almost never burns in two, and the bare bridge wire without the pyrogen is nearly as good an igniter as one with the pyrogen. Just so long as it is touching the BP propellant, it will work. However, it would not be a good idea to use any of these igniters for clusters.

But I digress. The current required to activate any given igniter is subject to random variations in its construction. We don't know what these are, nor do we much care. One of the interesting things about statistics is that if you combine several random processes into one, the combination tends towards a "normal" distribution. So we will assume that the igniter currents follow a normal distribution, with a few wrinkles.

When current is applied to the igniter for a set period of time, we are not so much applying a fixed current to the igniter as a fixed amount of energy. The temperature rise in the igniter and therefore the chance of igniting the pyrogen is a function of the energy applied. The power applied to the igniter can be computed by multiplying its resistance by the square of the current (I squared R). Power is simply energy per unit time, Joules per second in this case. Multiply by the time interval and you get the energy in Joules. The upshot of this is the chance of activating the igniter will vary with the square of the current. So the usual response is to use the log normal distribution.

Another wrinkle is that while each individual igniter's minimum current is randomly distributed and described by the log normal distribution, the chances of activating an igniter at any given current is the chance at that current plus the chances for all lower currents. So we have to use the log normal cumulative distribution function.

The code to compute this function in C is:

```c
/* Log-normal cumulative distribution function: the chance that an
   igniter fires at stimulus x, given the mean m and standard
   deviation s of the log of the activation current. */
double probability(double x, double m, double s)
{
    double z = (log(x) - m) / s;
    return 0.5 * (1.0 + erf(z / M_SQRT2));
}
```

In getting ready to do these tests I performed a simple check of my equipment. Dr. Neyer provides a free "crippleware" version of his software that is limited to eight samples, so that is what I used. I used a one second (no stopwatch involved) pulse width. This was just to get a feel for how the process worked. I used igniters pulled from packages of Apogee A3-6 motors purchased at a NARAM several years ago. For the cost of the igniters I also got uncertified motors. :-)

I started with a range on the average of 0.5 to 2.0 and the guess on standard deviation as 0.05. The data for this practice run was:

Stimulus Level (Amps) | Pass/Fail
---|---
1.25 | F
1.625 | P
1.4375 | F
1.53125 | P
1.48438 | P
1.39537 | P
1.3045 | P
1.12697 | F

When I computed the sample mean and standard deviation using my code I got an average current of 1.34 amps with a standard deviation of 0.13 amps. The value of the log likelihood function at this point was -3.37. This data series is too short to be of much use other than seeing how the process works. I plan on using somewhere between twenty and thirty samples for the actual tests.

One of the tricky things about the MLE is that you must have an overlap in the data to get unique results. In other words, the highest failure must be greater than the lowest success. Fortunately this data has a failure at 1.4375 amps and two successes at lower currents. This is yet another problem with Cody Steele's report. Three of the six sets of data have no overlap.
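The overlap check itself is simple to automate. A sketch:

```c
#include <float.h>

/* Returns 1 if the data "overlap": the highest failure level is
   above the lowest success level, which the MLE needs in order to
   give a unique estimate of the standard deviation. */
int has_overlap(const double *levels, const int *fired, int n)
{
    double highest_fail = -DBL_MAX;
    double lowest_pass  =  DBL_MAX;
    for (int i = 0; i < n; i++) {
        if (fired[i]) {
            if (levels[i] < lowest_pass)  lowest_pass  = levels[i];
        } else {
            if (levels[i] > highest_fail) highest_fail = levels[i];
        }
    }
    return highest_fail > lowest_pass;
}
```

Running this on the practice data above finds the failure at 1.4375 amps above the success at 1.3045 amps, so the overlap condition is met.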

This particular item held me up for a while. I still don't really understand it, but I managed to compute something. I located an old report (ARBRL-TR-02269) that discussed Fisher information, and I finally realized that my problem in understanding was the notation. That document uses different notation, and it let me figure out how to compute it.

The basic idea is to find the next stimulus value. But the goal is to find one that minimizes the variance of the estimated parameters. The Fisher information enables that.

Alas I wasn't able to use the GSL minimizers on the Fisher information. The reason was that there were two local minima. An example is shown here:

The Fisher information has minima above and below the mean. Naturally the GSL minimizer will not find the global minimum reliably. So I resorted to a brute force search. The search region is limited to near the mean and the search resolution is limited by the stimulus resolution so the search is still pretty fast.

One problem with this computation is that the formula divides by p(1-p), where p is the probability. If a sample point is far from the mean then this product can get really small. So small that a float or even a long double cannot represent it, and the result is zero. Dividing by zero causes trouble. My solution was simply to test whether this value was really small and drop the sample point if it is.

With that last hurdle out of the way it was time to do some testing.

Igniter | Pulse Width | Mean | Std. Dev. | All-fire | No-fire
---|---|---|---|---|---
Estes Solar | 100 msec | 3.85 A | 0.089 A | 4.13 A | 
Quest Q2G2 | 100 msec | 263 mA | 20.4 mA | 333 mA | 
Quest Q2G2 | 5 sec | 235 mA | 44.0 mA | | 130 mA

Keep in mind that the all-fire and no-fire currents here are statistical estimates at the 99.9% probability level. They are also dependent on my sample, and it is likely that there will be variation from batch to batch. Hopefully not too much.