First Weekend Success

After the first weekend of basketball, our Machine Learning predictions of lower seeds winning have results.

We had two measures of success: we wanted a lower seed to win in 47% of the games we picked, and we wanted to track the dollar value of virtually betting $100 on each game. By both measures, we succeeded: we correctly picked 6 upsets out of the 13 games we chose (46%), and we earned a virtual profit of $59 on $1300 wagered, roughly a 5% ROI.
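The two measures above reduce to simple arithmetic. A minimal sketch, using the figures reported in this post (the $100 flat stake per game is the "virtual bet" described above):

```python
# Figures from the post: 13 games matched the criteria, 6 upsets hit,
# $100 virtually staked on each game, $59 total virtual profit.
picks = 13
upsets_won = 6
stake_per_game = 100
profit = 59

hit_rate = upsets_won / picks           # target was 0.47
total_staked = stake_per_game * picks   # $1300
roi = profit / total_staked             # ~0.045, i.e. about 5%

print(f"hit rate: {hit_rate:.0%}, staked: ${total_staked}, ROI: {roi:.1%}")
```

Note that 6/13 is 46.2% and $59/$1300 is 4.5%, which the post rounds to 46% and 5% respectively.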

Overall, there were 10 instances where a lower seed won in the first two rounds. This year is on track for fewer lower-seed wins (22%) than the historic rate (26%). So even with "tough headwinds," we still met our expectations.*

Someone asked me over the weekend about one of the upsets: "How come you didn't have Middle Tennessee?" The answer is simple: it didn't fit our criteria. Games that match our criteria represent the largest historic collection of lower-seed wins. Lower seeds in games with different characteristics can still win; our criteria are simply the most predictive of a lower seed winning.

Besides some really, really close calls, we had several games where the lower seed had a good chance of winning and simply lost. Our play was to choose games that matched the criteria and spread the risk over several probable winners. This wasn't about picking the only upsets, or all of the upsets; it was about picking the set of games with the highest probability of the lower seed winning. And by our measures of success, we achieved our goal.
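The spread-the-risk idea can be sketched with a simple binomial model: no single pick is a lock, but a basket of picks that each carry a decent upset probability yields a fairly predictable number of wins overall. The 0.46 per-game probability below is a hypothetical figure (roughly this weekend's hit rate), not an output of our model, and the independence-between-games assumption is a simplification:

```python
from math import comb

p = 0.46  # assumed per-game chance the lower seed wins (hypothetical)
n = 13    # number of games matching the criteria

# Expected number of lower-seed wins across the basket.
expected_upsets = n * p  # ~6

def prob_at_least(k, n, p):
    """Probability of at least k upsets, modeling each game as an
    independent coin flip with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

print(f"expected upsets: {expected_upsets:.1f}")
print(f"P(at least 5 upsets): {prob_at_least(5, n, p):.0%}")
```

The point of the sketch: even though any individual pick loses more often than it wins, the chance that the basket as a whole lands near its expected count is high, which is why the strategy is judged on the set, not on any one game.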

The Machine Learning algorithm did what we expected: it identified a set of characteristics in historic data that was predictive of future results. The implication for any business is clear: if you have historic data and you leverage this type of expertise, you can predict future outcomes.

For the next round, we have 5 games that match our criteria:
Wisconsin over Florida
South Carolina over Baylor
Xavier over Arizona
Purdue over Kansas
Butler over North Carolina

If any games match our predictive criteria in the next round, we'll post them Saturday before tip-off.

If you want to see how this can relate to your business contact Gordon Summers of Cabri Group (Gordon.Summers@CabriGroup.com) or Nate Watson of CAN (nate@canworksmart.com).
* Don’t even start with the 1% difference between actual of 46% and the target of 47%