Psychology says it again: Trevor Knight is this year's sleeper at QB

One long-standing, oft-replicated phenomenon in social science is that the better looking you are, the more advantages you have for success1. Politicians, CEOs, the popular girl in high school, and even people in the company you work for are all case studies for this finding. Interestingly, researchers have uncovered a similar phenomenon: the more you look like a leader, the more successful you'll be in a leadership role2. 

A study attesting to this idea comes from Sam Sommers and Jon Wertheim's book, This is Your Brain on Sports3. Wertheim and Sommers found that, based on anonymous headshots alone, people rated quarterbacks as stronger leaders than linebackers or wide receivers. Additionally, QBs who looked more like leaders were more successful than those who did not.

2016: Our raters lead the way
Last year, we replicated the study by Sommers and Wertheim using the 2016 NFL draft class as our pool of players. We asked a sample of 406 people to rate the perceived leadership qualities of the QBs in the draft class, judging them only by their headshots. We then ranked the draft class based on average leadership rating, and compared with the experts' ratings (writers at ESPN, Yahoo, CBS, NFL, PFF, etc.).

In short, as the table below shows, we found that Carson Wentz should have been the consensus #1 overall pick, and that Jared Goff was overrated. We also found that Dak Prescott was one of the most underrated QBs in the draft. After their rookie seasons, here's how our study stacked up:

Note: QBR stands for Total Quarterback Rating, a performance metric developed by ESPN

2017: Warning on Watson, Mahomes
Time will tell whether our leadership rankings were accurate, especially given that some of the QBs in our study didn't play at all last year. But for now, it seems like this kind of study might be on to something, so with the 2017 NFL Draft coming up, we decided to do it again -- this time with the 2017 QB draft class.

The experts4 and the survey participants seem to be in some agreement this year: Davis Webb, C.J. Beathard, and Alek Torgersen appear to be locks as the 7th, 10th, and 11th QBs off the board, respectively. While there's no runaway first pick, there is a runaway sleeper in Trevor Knight -- a dual-threat QB from Texas A&M. Sounds familiar. 

Deshaun Watson slipped into the overrated category, but we doubt that will stop anyone from taking him pretty early. Patrick Mahomes II is probably going to be this year's Jared Goff: our raters didn't like what they saw. Meanwhile, this year's Carson Wentz could be Brad Kaaya, whom the experts consider a top-6 QB, and whose leadership our raters loved.

Look, we're not saying this is a game-changing way of evaluating QB prospects, but based on previous research, including our study from last year, this type of result might be worth a second look. But hey, according to redditor WALKER231, "the shittiness of [this study] was underrated." 

1 See: Dion, K., Berscheid, E., & Walster, E. (1972). What is beautiful is good. Journal of Personality and Social Psychology, 24, 285–290; Eagly, A. H., Ashmore, R. D., Makhijani, M. G., & Longo, L. C. (1991). What is beautiful is good, but...: A meta-analysis of research on the physical attractiveness stereotype. Psychological Bulletin, 110, 109–128; Nisbett, R., & Wilson, T. D. (1977). The halo effect: Evidence for unconscious alteration of judgments. Journal of Personality and Social Psychology, 35, 250–256; Brewer, G., & Archer, J. (2007). What do people infer from facial attractiveness? Journal of Evolutionary Psychology, 5, 39–49; Hamermesh, D. S., & Biddle, J. E. (1994). Beauty and the labor market. The American Economic Review, 84, 1174–1194.

2 Praino, Stockemer, and Ratis (2014)

3 Wertheim, L. J. & Sommers, S. (2016). This Is Your Brain on Sports: The Science of Underdogs, the Value of Rivalry, and What We Can Learn from the T-Shirt Cannon. New York: Crown Archetype

4 An average of: Pro Football Focus, Todd McShay, ESPN, Mike Mayock, Eric Edholm, Dan Kadar, Brent Sobleski, CBS, Bucky Brooks, Daniel Jeremiah, Charley Casserly, Chad Reuter, Rob Rang, Dane Brugler, Kristopher Knox, Land of 10, Mel Kiper, Luke Easterling, Chris Burke, Dieter Kurtenbach, NFL Draft Scout

Shut the F*%$ up about Sample Size

The analytics revolution in sports has led to profound changes in the way in which sports organizations think about their teams, players play the game, and fans consume the on-field product. Perhaps the best-known heuristic in sports analytics is sample size — the number of observations necessary to make a reliable conclusion about some phenomenon. Everyone has a buddy who loves to make sweeping generalizations about stud prospects, always hedging his bets when the debate heats up: “Well, we don’t have enough sample size, so we just don’t know yet.”

Unfortunately for your buddy, sample size doesn’t tell the whole story. A large sample is a nice thing to have when we’re conducting research in a sterile lab, but in real-life settings like sports teams, willing research participants certainly aren’t always in abundant supply. Regardless of the number of available data points, teams need to make decisions. Shrugging about a prospect’s performance, or a newly cobbled together pitching staff, is certainly not going to help the bottom line, either in terms of wins or dollar signs.

So the question becomes: How do organizations answer pressing questions when they either a) don’t have an adequate sample size, or b) haven’t collected any data? Fortunately, we can use research methods from social science to get a pretty damn good idea about something — even in the absence of the all-powerful sample size.
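One way to put numbers on this, rather than shrugging: even a small sample yields a point estimate with an honest uncertainty band attached. Here's a minimal sketch; the game values are invented for illustration, not real data:

```python
import math
import statistics

def mean_ci(sample, z=1.96):
    """Point estimate and ~95% confidence interval for a sample mean.

    Even with only a handful of observations, we can report an estimate
    with its uncertainty instead of declaring the question unanswerable.
    """
    n = len(sample)
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean
    return m, (m - z * se, m + z * se)

# A prospect's points scored in his first 5 games (hypothetical numbers)
games = [14, 22, 18, 25, 16]
est, (lo, hi) = mean_ci(games)
# est is 19.0, with a wide band -- wide, but far from useless
```

The band shrinks with the square root of the sample size, so the marginal value of "more sample size" falls off quickly; the honest move is to report the interval you have, not to wait forever.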

We measured team chemistry in the NBA. Unsurprisingly, Boogie is a bum

After talking about team chemistry forever, we finally put our science where our mouth is and developed a team chemistry monitoring system, the Team Chemistry Index (TCI). The TCI allows us to assess both an individual player's and an overall team's level of chemistry. We created the TCI, a proprietary observational measure, by combining existing methods, measures, and team chemistry research. 

With an NBA team's backing, we tested the TCI by observing games of the Sacramento Kings and Cleveland Cavaliers (among a few other teams) through TV broadcasts. Given the recent blockbuster trade that sent DeMarcus Cousins to New Orleans, we thought we'd divulge what the TCI told us about this controversial player.


Briefly, before we get to Boogie, a primer on the TCI. The aim of the TCI is to allow us to dive into players' psyches; it can also aid teams in predicting future performance, increasing insight into current players, and helping evaluate trades. Three components comprise each player's overall TCI score:

Overall TCI represents a player's individual contribution to team chemistry; it ranges from 0 (absolutely no chemistry) to 100 (extreme levels of chemistry). The emotive metrics (communication, support, and intensity) are weighted sub-scales of the TCI, and allow for a more detailed look at the dynamics of player behavior. We computed TCI scores using more than 5,000 total independent observations of player behavior in the first two months of the 2016-2017 NBA regular season. 
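Since the TCI itself is proprietary, here's a purely hypothetical sketch of how a weighted composite like this could work. The sub-scale names come from above, but the weights and ratings below are invented placeholders, not the actual measure:

```python
# Hypothetical weighted composite in the spirit of the TCI.
# Sub-scale names are from the article; the weights are our own invention.
WEIGHTS = {"communication": 0.40, "support": 0.35, "intensity": 0.25}

def composite_score(subscales):
    """Combine 0-100 sub-scale ratings into a single 0-100 composite."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(WEIGHTS[k] * subscales[k] for k in WEIGHTS)

# A chatty but low-support, low-intensity player (made-up ratings)
player = {"communication": 80, "support": 35, "intensity": 50}
score = composite_score(player)
```

Because the weights sum to 1 and each sub-scale is on a 0-100 scale, the composite stays on the same 0-100 scale as the sub-scales.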


This probably isn't shocking, but Boogie didn't score particularly well on our TCI measure. Storming off into the stands, pouting with referees, and quarreling with almost everyone really didn't agree with our measures of team support or intensity.

Boogie had a high communication score -- he was chatty. However, this may have been a product of his team: at least in the early going, the Kings were a very chatty team, with talkative, high-TCI veterans on the roster: Ty Lawson, Matt Barnes, and Garrett Temple. We might expect Boogie to be less communicative on a team with less communicative players, which certainly won't help his team chemistry.


To provide a comparison for Boogie's low score, let's take a look at the TCI of another superstar, Cavs point guard Kyrie Irving. Kyrie is one of the most well-balanced players we've observed; he is an exemplar of a team player. Even as an all-star, Kyrie takes pride in his performance as a teammate. Whether he's on the court or on the sidelines, Kyrie's energy is directed toward his team. Indeed, Kyrie is like oxygen -- essential to his team's survival, and eager to bond with other elements.

It remains to be seen how Boogie fits in with New Orleans, and perhaps most centrally, Anthony Davis. With stories like this coming out already, our results seem likely to hold: Much like kryptonite, Cousins doesn't bond well.

Team chemistry study conclusion: Let's get defensive

What do you do when you're stuck on a shitty team and you can't leave? 

Comparing yourself to a better team's performance won't help your self-esteem; in fact, it might make you feel worse about your team and its prospects. But what if, instead of focusing on wins and losses, you emphasize nonperformance aspects like academic performance, good looks, or general awesomeness? Research suggests that these comparisons should engender team-wide feelings of positivity, leading to a greater sense of identity, plentiful team chemistry -- and, ideally, increased performance.


In our last article, we gave a midseason report on the intervention we were conducting to implement some of these ideas. With the overall goal of increasing these teams' performance, we built a team chemistry intervention and administered it to three Southern California college football teams via a series of surveys throughout the season. 

Our initial glance at the data showed promising returns -- a slight trend on offense, with an increase in points scored per game, and a stronger trend on defense, with a decrease in points allowed per game. Now that all the results are in, we thought an update was in order.

A little of this, a little of that

After five rounds of treatment, the final results of the study are a mixed bag. First, the offensive trend seems to have washed out: all three teams' points scored per game were on par with their four-year averages. 

More encouragingly, the defensive trend became a bit less dramatic but remained consistent: two of the three teams in the study posted four-year lows in points allowed per game. 

Some is good, more is better

These results represent an encouraging start. But to really increase our confidence in claims about team chemistry leading to better performance, we'll need much more evidence. Fortunately, that's exactly what's in the works -- just like Kahneman and Tversky, we're looking to be sports analytics disrupters.

We're using team chemistry to increase performance for a few college football teams

There’s an old adage in psychology that says, “the best predictor for future behavior is past behavior.” This formula pretty much sums up how sports analytics works: analytical minds compile players' previous performances, and use them to build models to predict how they’ll perform in the future. Though this method seems to be pretty good, it’s far from the whole story.

Rob Arthur's recent piece on 538 amalgamated some of the more reliable baseball prediction outlets (PECOTA, Vegas lines, and FanGraphs projections).
Together, the predicted win totals from these sites correlate well with the actual win totals of most baseball teams (r = .60). If we do a little math, we find that previous performance explains about 36% of why baseball teams win as much or as little as they do. Now, if we knew 100% of the reason, we could flawlessly predict each team's win total; unfortunately, we've only got 36%. The result is that something other than previous performance accounts for around 64% of a baseball team's performance1.
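The "little math" here is just squaring the correlation coefficient to get the proportion of variance explained:

```python
# Correlation between predicted and actual win totals (from the 538 piece)
r = 0.60

explained = r ** 2            # r-squared: share of variance captured, ~0.36
unexplained = 1 - explained   # everything previous performance misses, ~0.64
```

This is the standard r-squared interpretation: correlation of .60 sounds strong, but it still leaves nearly two-thirds of the variation in win totals on the table.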

So the question becomes: Can we measure other stuff to add to the already awesome previous performance prediction models to increase our predictive power? If we said "Hell yes, you can!" would you call us crazy? You wouldn’t be the first. 

The future is now

First off, psychology as a discipline is quite large. There are plenty of sub-topics within psychology that can be brought to the professional sports landscape to help predict future performance. That said, research has time and time again shown that chemistry and performance are related. So, for this reason, we've decided to focus on this often talked-about (but not-so-often accurately measured) component of a team’s ability to stick together. 

Utilizing over 30 years of research, we designed a survey to measure and increase team chemistry; we are currently implementing it with a few community college football teams in Southern California, with the aim of increasing their performance. After 3 rounds of the survey, here’s a midseason report of how it’s going:
Each team has played 6 games thus far, with an average opponent winning percentage of 47%.
Looking at the four-year averages, the overall trend indicates that increasing team chemistry has played some role in improving these teams' offenses and defenses. Before you put on your scientist hat, we know these findings are preliminary: it's a small sample of schools -- perhaps these teams would be performing this well without our survey2. There are also another 4 to 5 games left for each team, so things could change by the end of the year. However, it's a nice enough trend to get us excited about the intervention. Stay tuned for an end-of-season report to see if the results hold.
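As a sketch of what this midseason comparison looks like in practice, here's the basic computation with entirely invented team names and numbers (not the study's actual data):

```python
# Points scored per game: this season so far vs. each team's four-year
# baseline. Team names and all numbers are hypothetical placeholders.
four_year_avg = {"Team A": 24.1, "Team B": 20.3, "Team C": 27.5}
midseason_ppg = {"Team A": 27.0, "Team B": 23.8, "Team C": 28.9}

# Positive deltas mean a team is scoring above its historical baseline
deltas = {team: round(midseason_ppg[team] - four_year_avg[team], 1)
          for team in four_year_avg}
```

The same subtraction against the baseline works for points allowed per game, where a negative delta is the encouraging direction.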

The million dollar pitch

Previous research in team chemistry has often occurred within college teams, so the mid-season results of this study are by no means revolutionary. What could be revolutionary is if professional teams begin to use this research: monitor players, build a team with players high in team chemistry, and design interventions to increase overall team chemistry. The idea is to chip away at that missing 64% of future performance that previous performance doesn't capture.

1 This assumes our predictive models built from previous performance can't improve. We're capturing about 36% of the variance in future performance right now in 2016; what's to say we can't do better down the road? True, the correlation coefficients have ranged from just over r = .8 down to just below r = .3, so on a good year previous performance can explain upwards of 64% of the variance, which still leaves 36% unexplained. Predictive power has varied over the last 20 years, but the average estimate is that previous performance only eats up around 36% of future performance.

2 Each team was randomly chosen and had to be a low-performing team in the past.