Driver Test Results: May 2012
2009 Testing Methodology for Drivers
The 2009 GolfTest USA Driver Test consisted of 23 drivers from 15 club companies: Titleist, Nike, Ping, Mizuno, Bobby Jones, Hireko, Warrior, Natural Golf, Infiniti, Simmons, Cleveland, Srixon, Wilson, Pinemeadow and Nickent. Feel Golf submitted only two drivers, which was not enough for us to complete the test for them. We usually need five or six drivers of each model so we can properly fit each tester with the right flex and loft. Thus we were not able to include the Feel driver in the final test results.
There were several companies that wanted to participate but they did not have their new models available at test time. We have agreed to hold a supplemental test in November for those companies so we can get the results posted in time for the 2010 golf season.
Some companies said they would participate and then we never received the drivers. They were: Bob Burns, SMT, Krank, Nicklaus, Yonex, Founders Club, Hippo, Orlimar, Geek, Magique, Alpha.
Companies that said they would consider participating but never responded are: Honma, Cobra, Vulcan, Bang, MG, Fox, Louisville, PowerBilt.
Companies that declined our invitation to participate are as follows: TaylorMade, Tour Edge, Callaway, Monark, Innovex.
Companies that never responded to our phone calls and emails are as follows: Adams, Nakashima.
If you would like to see any of the companies that didn’t participate included in our driver test or other club tests, please contact those companies and let them know you want them to participate in upcoming tests.
The 2009 GolfTest USA Driver Test has been refined and modified to produce more accurate results that golfers can use when deciding which new driver to consider purchasing. We use the FlightScope Kudu launch monitor when we conduct proprietary testing for golf club companies. It is very time consuming, but using the Kudu helps us produce objective results with a margin of error of less than 3%. With such a low margin of error we can assure golf club companies that the results we provide are accurate, verifiable, credible and repeatable. Because we use real golfers in the testing we conduct, both club companies and golfers alike can be confident that the test results are an accurate indication of the performance of any particular golf club.
Thus we decided to use the Kudu in our 2009 Driver Test. These test results are in the public domain and available to anyone. Proprietary testing that we conduct for golf companies is the property of the companies that retain our services; it is their prerogative whether they choose to disseminate any of the results obtained from the test. GolfTest USA is paid a fee by golf companies to conduct proprietary testing. Because we used the Kudu launch monitor, the test took twice as long to conduct. We are very pleased with the results because the Kudu is capable of calculating the Smash Factor of a driver. The Smash Factor is a key indicator of a driver’s performance: it is the ratio of the ball’s speed as it comes off the face of the club to the club head speed. Many variables affect the Smash Factor. Each tester swings a golf club differently, which can affect the results, and each driver has its own characteristics, such as weight distribution, MOI, shaft quality and shaft kick point. Taking all the variables into consideration, we have found that at least 500 shots are needed to compute an average Smash Factor for any given driver to ensure a margin of error of less than 3%. In this test we accomplished that, and we list the Smash Factor for each driver in the test.
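To make the Smash Factor definition concrete, here is a minimal sketch of the arithmetic, not GolfTest USA’s actual software. All speeds below are hypothetical, and real launch monitors like the Kudu report these values directly.

```python
def smash_factor(ball_speed_mph, club_speed_mph):
    """Ratio of ball speed off the face to club head speed for one shot."""
    return ball_speed_mph / club_speed_mph

# Hypothetical shots: (ball speed, club head speed) in mph.
# A real test would average at least 500 shots per driver.
shots = [(148.5, 101.0), (152.0, 103.5), (144.8, 99.2)]

average = sum(smash_factor(b, c) for b, c in shots) / len(shots)
print(round(average, 3))  # average Smash Factor across the shots
```

A perfectly struck driver shot approaches a Smash Factor of about 1.50, which is why the average across hundreds of shots is a useful single-number summary of how efficiently a driver transfers energy to the ball.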
The following information is an explanation of the testing methodology we used in conducting the 2009 Comprehensive Driver Test.
60 golfers of varying skill levels each hit a group of 5 to 7 driver models. (We never have a tester hit more than about 60 shots in a testing session.)
If a driver is designed for lower-handicap golfers, then the testing group is made up of golfers with handicaps of 12 or less. If a driver is more of a game-improvement club, then the group is made up of golfers whose handicaps are above 12. Some drivers are made to help golfers who tend to slice or have difficulty controlling their drives, so for those we look for golfers we think will benefit from that type of driver. A tester hits a few shots with a driver until they feel comfortable swinging it. Then we ask them to hit 10 shots, keeping in mind that they will be asked to evaluate how they feel they performed with that driver. Once they have hit the shots, they come over to a test administrator and rate the driver on each of the ten criteria on a scale of 1 to 10. We ask them to be as fair and unbiased as they can in their evaluation. Once they have evaluated driver #1 they move on to driver #2, and so on. They are told not to compare one driver to another but only to rate how they performed with each driver. While they are hitting each driver, the test administrator uses the Kudu to rate their performance in distance, control, accuracy and trajectory. A rating factor is established for each tester, which is used at the end of the test to supplement their ratings. This helps ensure that each tester is fair, unbiased and consistent in their ratings from driver to driver. Out of the 60 golfers who test each driver, we pick the 50 evaluations that we feel were done in the fairest and most consistent manner and use them to compute the average rating in each criterion for each driver. We have found that this methodology gives a fair and accurate representation of how each driver performed in the test. We still observe that “well known” clubs tend to get slightly higher ratings than their actual performance warrants, while lesser-known clubs don’t get quite the credit they deserve.
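The aggregation step above can be sketched in a few lines. Note one loud assumption: the article says the 50 evaluations are chosen by fairness and consistency, a judgment call by the administrators; the sketch below approximates that selection by simply trimming the 5 highest and 5 lowest ratings, which is not the actual rule.

```python
def trimmed_average(ratings, keep=50):
    """Average the middle `keep` ratings after sorting.

    Trimming extremes is only a stand-in for GolfTest USA's
    fairness/consistency selection, which is not a numeric rule.
    """
    drop = (len(ratings) - keep) // 2
    kept = sorted(ratings)[drop:len(ratings) - drop]
    return sum(kept) / len(kept)

# 60 hypothetical 1-10 ratings for one driver in one criterion
ratings = [8.0] * 20 + [8.5] * 20 + [7.5] * 10 + [9.0] * 10
print(round(trimmed_average(ratings), 2))
```

Averaging 50 of 60 ratings this way dampens the effect of any single tester who rates far out of line with the group, which is the stated goal of discarding the least consistent evaluations.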
By adding the rating factor, which is based on the actual launch-monitor results, we are able to level the playing field, smooth out the results and give a more accurate representation of how each driver really performed in the test. By averaging the results of 50 testers we are confident that the results reflect how golfers really performed with each driver. Because the ratings are averages, the results tend to group together in a small range, usually from 7.8 to 8.6 (though they sometimes fall outside a standard bell curve). Even a difference of 0.1 rating points can indicate that one driver performed better than another: a driver whose “Distance” rating was 8.3 could be considered longer than a driver whose rating was 8.2, though that might only equate to 1 or 2 yards longer on average. We have yet to test a driver that hits a ball on average ten yards longer than any other driver. In the “Accuracy” criterion, the average dispersion is 11 yards, measured from a center line 200 yards from the tee. So a driver whose average dispersion is 10 yards is about 9% better than the average and clearly superior to the average driver. That improvement might only translate into a 0.1 better rating overall, but it is still significant for a golfer looking for a driver that hits the ball straighter. Having said all that, this is still not rocket science. We do feel, however, that this testing format is one way golfers can help determine which golf clubs would be best for them. It is a very crowded field of golf equipment out there, and golfers need all the help they can get from an independent and unbiased source. We feel that GolfTest USA helps provide the information they need to make an informed purchasing decision.
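The dispersion comparison above is simple percentage arithmetic; here it is spelled out with the numbers from the text (11 yards is the field average stated in the article, 10 yards is a hypothetical better-than-average driver).

```python
field_average_yards = 11.0   # average offline dispersion across all drivers
driver_average_yards = 10.0  # a hypothetical tighter-than-average driver

# Relative improvement over the field average, measured from a
# center line 200 yards from the tee.
improvement = (field_average_yards - driver_average_yards) / field_average_yards
print(f"{improvement:.1%}")  # prints 9.1%
```

One yard out of eleven is roughly a 9% tightening of dispersion, which is why even a 0.1 difference in the averaged 1-to-10 ratings can reflect a real performance gap.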
On a scale of 1 to 10 (with 10 being best), testers rated each of the following criteria.
Distance: Each tester rated how far they hit each driver.
Control: Testers rated how much control they felt they had while hitting each driver.
Accuracy: Testers rated how accurate they felt each driver was.
Forgiveness: Testers rated how forgiving the driver was on off center hits.
Sound: Testers rated how much they liked the sound of each club.
Appearance: Testers rated how much they liked the appearance of the club.
Feel: Testers rated the feel of the club at set up, during the swing and at impact.
Ball flight (Trajectory): Testers rated how much they liked the trajectory of the ball off the club.
Recommend to others: The testers were asked to rate how strongly they would recommend each driver to their friends or other golfers.
Overall Rating: The testers were asked to take into consideration what is important to them in a driver and give it an overall rating. (This is not an average of the other criteria).
Overall Average: A final category which is the average of the 10 criteria rated by the testers. We feel this average helps to give golfers an indication of how each club performed taking into account all of the criteria in the test.
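The “Overall Average” is a straight mean of the ten criteria scores. A minimal sketch, with criterion scores invented for illustration:

```python
# Hypothetical per-criterion ratings for one driver (1-10 scale).
criteria = {
    "Distance": 8.4, "Control": 8.2, "Accuracy": 8.1, "Forgiveness": 8.3,
    "Sound": 8.0, "Appearance": 8.5, "Feel": 8.2, "Ball flight": 8.1,
    "Recommend to others": 8.3, "Overall Rating": 8.4,
}

# The Overall Average is the simple mean of the ten criteria.
overall_average = sum(criteria.values()) / len(criteria)
print(round(overall_average, 2))  # prints 8.25
```

Under the 8.1 threshold described below, this hypothetical driver would qualify for the “Seal of Excellence.”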
Let us know if there are other drivers you would like us to test, and we will contact the manufacturers to see if they will participate in our next testing cycle: firstname.lastname@example.org.
Based on the testing completed to date, any club with an “Average” score of 8.1 or better under our scoring system is awarded the GolfTest USA “Seal of Excellence.” Being awarded the “Seal of Excellence” indicates a golf product has been tested and reviewed by our staff of golf testers and judged to be of superior quality, value and performance. Golfers who purchase any of these clubs can feel confident they are obtaining a quality golf product.
We would like to make some observations regarding the results of the test. The Titleist 909 D2, Ping G10 and Cleveland Launcher had the highest overall average in the test. The Titleist 909 D2 had the highest Smash Factor with the Ping G10, Simmons Liberator, Titleist 909 D3, Nike SQ DYMO Stra8-Fit and Cleveland Launcher close behind.
For “Distance” the Titleist 909 D2 was tops, with the Ping G10, Titleist 909 D COMP, Nike SQ DYMO Stra8-Fit, Simmons Liberator and Cleveland Launcher very close behind.
The drivers that had the best “Control” were the Ping G10 and the Titleist 909 D2.
The highest rated driver for “Accuracy” was the Nike SQ DYMO 2 Stra8-Fit with the following drivers all close behind: Ping G10, Mizuno MX 700, Titleist 909 D2, Simmons Liberator, Cleveland HiBore Monster XLS, Bobby Jones Workshop Edition, and Cleveland Launcher.
The best driver in the “Forgiveness” criterion was the Bobby Jones Workshop Edition. Others that rated highly were the Cleveland Launcher, Nike SQ DYMO Stra8-Fit, Mizuno MX 700 and Nike SQ DYMO 2 Stra8-Fit.
The drivers that rated highest for “Sound” were the Bobby Jones Workshop Edition, Cleveland Launcher, Nike SQ DYMO Stra8-Fit, Ping G10 and Titleist 909 D2.
The highest rated drivers for “Appearance” were the following: Ping G10, Titleist 909 D2, and Cleveland Launcher.
The top rated driver for “Feel” was the Titleist 909 D2. Also highly rated were the Bobby Jones Workshop Edition, Titleist 909 D3, Hireko Caiman Power Play, Cleveland HiBore Monster XLS, Nike SQ DYMO Stra8-Fit, and Cleveland Launcher.
The Ping G10 and Titleist 909 D2 were tops in “Ball Flight”.
The drivers that were the highest “Recommended” were the Titleist 909 D2 and the Ping G10. Other drivers that were strongly “Recommended” were the Nike SQ DYMO Stra8-Fit, Cleveland Launcher and the Titleist 909 D3.
The driver that rated the highest “Overall” was the Titleist 909 D2. Closely following was the Ping G10, Cleveland Launcher, Titleist 909 D COMP and the Titleist 909 D3.
From the test results it is clear that Titleist, Ping and Cleveland make drivers that offer great performance. That is no reason, however, to discount the other drivers in the test; they should be given due consideration when making that important decision as to which driver would be best for you. All of the drivers in the 2009 Driver Test met our requirements for the GolfTest USA “Seal of Excellence.”