The problem in this case is that the sensors are not truly independent: because of sideslip we cannot directly obtain an alpha value from a single sensor, only from the average of the two values. Apologies, that was what I meant when I said that the answer is always the average.
So this is a scenario of having to calculate the statistic in circumstances where a sensor may be giving erroneous data, and trying to minimise that effect, versus picking the faulty sensor and disregarding it. If you decide not to reject a sensor then you are deciding to disregard sideslip, which may be valid, but I'd assumed here (for the sake of discussion) that one would wish to continue to compute 'true' alpha come what may.
From general statistics there are rules for choosing a geometric mean over an arithmetic one. In short, use the geometric mean where the data is quite asymmetric,
e.g. crowded over the low values but with some high values skewing the tail to the right. The Davies test for skewness can be used to determine whether an arithmetic or geometric mean should be used. The relatively few large alpha spike values seen in the QF 72 incident, compared to the small-magnitude 'true' alpha values, seem to me to fit those rules, hence my original comment. The geometric mean would fit even better if we had more than one round of sensor values.
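To make that concrete, here's a rough sketch (my own illustration; the alpha values are invented for the sake of argument, not actual flight data) of computing sample skewness on a spiky sample of the kind I'm describing:

```python
import math

def sample_skewness(xs):
    """Adjusted Fisher-Pearson sample skewness (bias-corrected g1)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    g1 = m3 / m2 ** 1.5
    return g1 * math.sqrt(n * (n - 1)) / (n - 2)

# Hypothetical sample: mostly small 'true' alpha values with a couple
# of large spikes skewing the tail to the right.
alphas = [2.1, 2.0, 2.2, 1.9, 2.0, 50.9, 2.1, 2.0, 16.9, 2.1]
print(sample_skewness(alphas))  # strongly positive, i.e. right-skewed
```

A strongly positive result is the 'crowded low values, tail to the right' shape where the rules favour the geometric mean.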
Of course if the data is quite symmetric then the arithmetic mean is the better statistic; so, to look at it from another perspective, having selected the arithmetic mean you're saying something about what you expect or assume the population of values to be. As long as that assumption is correct the arithmetic mean is optimal; as soon as high-value noise is introduced (skewing the distribution), the geometric mean is better.
Another point is that a population statistic can 'deal' with random noise; a bias error (for example a sensor hard-failing to zero or max) is not something that any statistic can address. So you still need to handle this sort of biasing failure through other means.
Thinking about your example of a true high alpha (say flaring to land) with a false low sensor, then yes, the geometric mean would produce a lower resultant alpha. Example: (15, 1) gives AM = 8, GM ≈ 3.9. So the geometric mean is comparatively worse, and that's an inherent property of the statistic, i.e. the GM will always be less than or equal to the AM. So yes, implicitly I was thinking about spiky high-value errors rather than low-value errors, given the original context.
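A quick sketch comparing the two means in both failure directions (my own toy numbers, echoing the spike case and your flare case):

```python
from math import sqrt

def am(a, b):
    """Arithmetic mean of two sensor readings."""
    return (a + b) / 2.0

def gm(a, b):
    """Geometric mean; only defined here for non-negative inputs
    (the -2.0/+2.0 complication raised later in this thread)."""
    return sqrt(a * b)

# High-value spike: true alpha ~2 deg, one sensor spikes high.
print(am(2.0, 50.0), gm(2.0, 50.0))  # 26.0 vs 10.0 -> GM damps the spike

# Low-value fault during a genuine high-alpha phase (the flare example):
print(am(15.0, 1.0), gm(15.0, 1.0))  # 8.0 vs ~3.87 -> GM makes it worse
```

The asymmetry falls straight out of the AM-GM inequality: the GM can only ever pull the consolidated value down relative to the AM, never up.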
I agree there's definitely a computational overhead; you get nothing for nothing. But (for example) we have been doing fixed-point square-root calculations for a while now, so this is not ground-breaking.
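For illustration, the classic bit-by-bit integer square root is the sort of FPU-free routine I have in mind (a sketch of the general technique, not any particular avionics implementation):

```python
def isqrt_fixed(x):
    """Integer square root by the classic bit-by-bit (digit-by-digit)
    method: only shifts, adds and compares, so it suits fixed-point
    hardware with no floating-point unit. Returns floor(sqrt(x))."""
    assert x >= 0
    result = 0
    bit = 1 << 30  # a power of four near the top of the 32-bit range
    while bit > x:
        bit >>= 2  # scale down to the largest power of four <= x
    while bit:
        if x >= result + bit:
            x -= result + bit
            result = (result >> 1) + bit
        else:
            result >>= 1
        bit >>= 2
    return result

print(isqrt_fixed(144))   # 12
print(isqrt_fixed(1500))  # 38, i.e. floor of sqrt(1500) ~ 38.7
```

In a fixed-point control computer the same loop runs over the scaled integer representation; the point is only that the operation count is small and bounded, which matters for the real-time phase-margin concern raised below.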
On Friday, 23 December 2011 at 8:57 PM, Palin, Stuart (UK) wrote:
> > Matthew Squair
> > Sent: 23 December 2011 01:16
> > Stuart I'll try and answer your questions as follows.
> > > " two disagreeing, but equally believable, sources.."
> > But are they? Have a look at the published literature on
> > Airbus voting algorithms (mostly in the context of unreliable
> > airspeed).
> I agree that there is literature published and there might be other ways
> of determining which sensor was the most likely to be correct - but in
> that case why not simply reject the one you believe to be incorrect and
> go with the one most likely to be valid; it is not as though the mean
> (of any form) is any more valid in terms of being an accurate
> determination of what is actually happening. Peter Ladkin in his post of
> 23-Dec-2011 07:03 provides references and further details of
> complexities involved in dealing with sensor data. However, I do not
> intend to comment on the Airbus case in particular - I was more
> concerned with the simplified claim that GM is better than AM and was
> seeking to set the pre-conditions for considering this claim.
> > > "what evidence is there that Geometric Mean is 'better' than
> > > 'Arithmetic Mean"
> > >
> > I'm tempted to say 'the maths is the maths', but (case in
> > point) if QF 72 had used a GM it would have reduced the severity
> > of the excursion, no?
> Possibly, but this is choosing the result you would prefer to have then
> selecting an algorithm that gets you closer to it. Simply looking at
> the two values (in the absence of any other indications) why should GM
> be considered any better than AM?
> If there is other reasoning that can be made about the data then it may
> be possible to construct an argument for one form over the other.
> However, this argument is not made in the reference you cite - which is
> why I am not convinced of the claim. (Note this is not saying whether
> the claim is right or wrong, simply that the argument for it is not
> adequately made.)
> > More subtly, any average is a population statistic. Once
> > you've elected to use a pop. stat the next logical question
> > is how (if at all) do you address the problem of a large
> > value dominating the statistic and in effect 'devaluing' the
> > smaller value. If you have three values you can apply voting,
> > if you have two values then GM is less sensitive to that
> > case. The difference is the difference between hazard
> > reduction and hazard elimination.
> Or vice-versa, how do you address the problem of a low value dominating
> the statistic? Again, note that the starting point for this is the
> 1-v-1 case, where more data is available there are other possibilities
> other than relying on simple statistics. (Even in the 1-v-1 case in
> critical systems it is typical to look for additional information - such
> as the historical behaviour of the data, or reasonableness).
> > "...what if the answer is meant to be 64.0?"
> > Well, by definition the answer is neither 2.0 nor 64.0, it's
> > supposed to be the average.
> Maybe I did not word my proposition carefully enough. If we are in a
> situation where we have two disagreeing values which purport to be a
> representation of what is happening in the "real world". I was asking
> you to consider the case where the situation in the "real world" is
> better described by "64.0" because in this case GM appears to fare worse
> than AM.
> > So what we're talking about here is minimising the average
> > value's vulnerability to unexpected (hopefully temporary)
> > major excursions in a value.
> I agree that is desirable [though I would choose to talk in terms of a
> 'consolidated' value rather than average - but that is possibly my
> background coming through] - but I think you are falling into a mindset
> of considering only 'large values' as the 'major excursion'; what if the
> small value is the major excursion?
> > "What if your inputs are -2.0 and +2.0?"
> > Good point, well made. I'm sure that there are practical
> > complications like that, but (hand waving moment) I'm pretty
> > sure they're not insoluble either.
> Typically at the cost of additional complexity and computing resource;
> even doing a square-root adds considerably to design complexity if you
> are using fixed-point arithmetic. And where you are also trying to
> deliver the results in real-time (and I am talking of meeting aircraft
> control-law phase margin sorts of times here) whilst also doing all the
> other computations needed there needs to be a well-defined benefit to
> justify this cost. So far I don't think an adequate argument has been
> made. [I have not had time to read the report that PBL has cited - but
> given John Rushby's previous work in this area I expect that there will
> be a strong supporting argument for any proposed complexity in the
> algorithms used].
> > "..a post-incident knowledge of what the AoA was.."
> > Well yes, based on those parameters GM would have performed
> > better, which was the point of raising the argument in the
> > first instance. I don't see this as cherry picking rather
> > using real data to compare the performance of two alternative
> > schemes.
> I would disagree - this is a specific instance of real-data, but the
> argument presented does not consider what the wider range of
> possibilities might be.
> > " (two?) erroneous low values"
> > True, in which case the values would agree and GM would
> > achieve nothing, but then you'd also be vulnerable with an AM as well.
> I am unclear as to what values might "agree", I was proposing a case
> where an aircraft might be in a situation where AoA could be higher than
> normal and one of the sensors fails in a similar way to QF 72 except that
> the spikes inject abnormally low values. In this case GM would seem to
> give a 'lower' consolidated AoA than AM and this would cause a more
> severe pitch down event.
> [In retrospect bringing the landing phase into it was probably not a
> good idea as any sort of pitch-excursion is probably a very bad thing in
> this part of the flight - and this could become a distraction for this
> particular argument]
> > I hope that helps, or sparks more debate :-)
> Happy to make a Christmas wish come true ;-)
> [PBL - Thank you for the reference to the John Rushby/SRI paper - I look
> forward to reading it.]
> Stuart Palin
Received on Sat 24 Dec 2011 - 01:55:18 GMT