Thursday 14 February 2013

JD Power Quality Rankings and Statistical Significance

...is JD Power more reliable than Republican pollsters?...
JD Power's latest quality survey reports a gradual (but over time substantial) improvement in vehicles sold in the US, averaging 126 customer-reported defects per 100 vehicles, versus 216 for 2007. That recent vehicles are better than those of a decade ago matches casual observation of new and used cars. But JD Power also reports numbers by brand, resulting in a ranking widely reported by the media and (when the rank is high) echoed in advertising. (See, for example, the Feb 13 story in Automotive News.)
Is it really credible that cars have improved that much in a scant 6 years? Or has the nature of reported problems shifted over time? Even more glaring, Toyota rates 58% worse than Lexus. And is Honda at 119 defects per 100 vehicles really better than Acura at 120, or for that matter Chevrolet at 125? OK, my phrasing is unfair; they're in the same ballpark. But is JD Power more reliable than Republican pollsters?
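To make the question concrete, here is a minimal back-of-the-envelope sketch in Python. JD Power doesn't publish per-brand sample sizes, so the 2,500 respondents per brand below is purely hypothetical, and treating each owner's defect count as Poisson is my assumption, not JD Power's methodology.

    import math

    def rate_z_test(per100_a, per100_b, n_a, n_b):
        """Normal-approximation z-test for the difference in two brands'
        defect rates, treating each owner's defect count as Poisson."""
        la, lb = per100_a / 100.0, per100_b / 100.0   # defects per vehicle
        se = math.sqrt(la / n_a + lb / n_b)           # std. error of the difference
        z = (la - lb) / se
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p-value
        return z, p

    # Hypothetical sample sizes; JD Power does not publish them by brand.
    z, p = rate_z_test(119, 120, 2500, 2500)
    print(f"Honda 119 vs Acura 120: z = {z:.2f}, p = {p:.2f}")
    # prints z = -0.32, p = 0.75: the one-defect gap is noise at this sample size

Under those assumptions the one-defect gap between Honda and Acura is far inside the noise, which is the point: without sample sizes and a margin of error, the rank order by itself tells us very little.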
Nowhere do we find journalists reporting the estimated margin of error, surely critical to understanding whether the ranking is meaningful. Nor do they question JD Power about potential biases in the survey. Are those who respond representative of the average owner, or is there sample bias? Is the difference between Lexus and Toyota a dealership effect? Surely luxury dealers pamper their customers, fixing little problems without charge during routine service visits; do survey respondents count something as a problem if they never have to go out of their way to resolve it? Then there's confirmation bias: owners of a Lexus are bombarded with the quality story and want to believe they spent their money well. Such owners adjust their perception of reality accordingly, biasing survey responses.
I have to assume that the staff at JD Power understand these issues. They may have a sense of the confirmation bias, though that's a hard nut to crack. Ditto dealership effects. But they surely estimate their margins of error, and compare respondents with non-respondents to gauge selection bias.
...political journalists ask hard questions about survey results...
The rank-order method of reporting is part of JD Power's business model. It lends itself to the annual spate of news stories and (for some OEMs) tags in their advertising. Without widespread media coverage, their surveys would be worth much less money to manufacturers. Given that, journalists ought to press JD Power for more detail rather than serve up puff pieces. Political journalists ask hard questions of survey results. Can't business journalists do the same?!
...mike smitka...
This graph illustrates the JD Power rankings with a 3% margin of error. I centered the chart on Ford; the lightly shaded box includes all brands indistinguishable from Ford at that margin of error. The black line represents the industry average of 126 defects per 100 vehicles. I omitted the highest- and lowest-ranked brands for visual clarity.
JD Power Quality Rankings (select)
Brands Statistically Indistinguishable from Ford at a 3% Margin of Error
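For the curious, the shading rule behind the chart can be sketched in a few lines of Python. Ford's own score isn't reported above, so 127 is a placeholder; Honda, Acura, and Chevrolet are the figures quoted earlier, and I read the band as plus-or-minus 3% of Ford's score.

    # Flag every brand whose score sits within 3% of Ford's.
    # Ford's score here is a placeholder; the other three are from the post.
    scores = {"Honda": 119, "Acura": 120, "Chevrolet": 125, "Ford": 127}
    ford = scores["Ford"]
    band = 0.03 * ford   # the 3% margin of error, centered on Ford
    shaded = sorted(b for b, s in scores.items()
                    if b != "Ford" and abs(s - ford) <= band)
    print(f"Within {band:.1f} defects per 100 vehicles of Ford: {shaded}")

Strictly speaking, comparing two scores that each carry a 3% margin of error widens the indistinguishable band by roughly a factor of 1.4; the sketch follows the simpler reading of the chart.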