The curious case of Political Intelligence
How do we decide which polls we aggregate, and which we don't?
Keen-eyed poll observers may have noticed a new poll floating around this week, conducted on behalf of Brendan Herrera, a Republican candidate for House in Texas’ 23rd congressional district. (Herrera is challenging incumbent Republican Tony Gonzales. Gonzales has been embroiled in scandal related to a sexual relationship he had with a staffer, who died by suicide in September.)
The poll, conducted by a firm called “Political Intelligence,” is not the first that the firm has done for Herrera. The first survey I noticed was conducted in December, and reported on February 13th by the Daily Caller. The second survey, which gained more attention, was reported by the New York Post on the 23rd. From there, it spread: it was retweeted by a reporter for the Texas Tribune, and picked up by news outlets like Axios and Politico.
Now, that’s all par for the course in poll coverage. But here’s the problem. I’ve spent two weeks trying to find out who conducts polls for “Political Intelligence.” I still have no idea, and the Herrera campaign hasn’t responded to my emails. I can’t email the company because I can’t find any information about them at all.
This is an issue for us here at 50+1 because we adhere to a strict set of methodological standards before aggregating polls from new polling firms. Part of that process is verifying that the data came from a real pollster, that it was generated in a statistically sound way, and that the pollster meets some minimal standards for disclosure, like telling us who paid for the poll and the basic methodology by which it was conducted.
But in this case, we have been unable to find any of that information for “Political Intelligence”; the name of the firm is manifestly un-googleable (which isn’t actually all that uncommon in the shadier side of the polling universe). We don’t have any leads on who might be behind these surveys.
We’re all about transparency here at 50+1, and I felt this was a good opportunity to peel back the curtain on some of our research methodology. So without further ado, come on a journey with me as we try (and fail) to hunt down that which wishes to remain hidden.
The Hunt
Upon the release of the first poll on the 13th, I began a series of steps I would usually take when vetting a new pollster.
First, I reached out to my network of fellow poll hunters to see if any of them had any information. No dice.
Next, I tried googling, of course. But, dear reader, googling the phrase “political intelligence” yields all kinds of results, very few of which are useful. Go ahead, give it a try. After refining my search in various ways, I finally discovered a campaign firm called “Political Intelligence,” and thought I had hit paydirt. But after reviewing their website, one thing seemed odd: this is a Canadian firm. Why would they be polling a Republican congressional primary in Texas?
However, this jogged something in my memory. Back in 2024, when I was working at FiveThirtyEight, I traced exactly these steps. I have seen Political Intelligence before, though the exact context is lost to wherever the archives of old FiveThirtyEight Slack messages are kept. I even recall reaching out to the Canadian firm to ask them if they conducted the poll, which they denied. I was unable to verify details about the company then, just as now. (For good measure, I confirmed these details with a former coworker who also remembered that saga.)
Finally, in a last-ditch effort to get any information, I went through the Herrera campaign’s financial filings on the FEC website, to see if I could find any payments to Political Intelligence. Such payments could include more information about the firm, like a physical address. But alas, it appears that many of the campaign’s expenses are routed through a consulting firm, which is likely the entity hiring the pollster, rather than the campaign account paying the pollster directly. This keeps the expenses opaque: the consulting firm is not required to submit its own filings to the FEC.
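For readers who want to poke at disbursement records themselves, this kind of search can be scripted against the FEC’s public openFEC API. Here’s a minimal sketch that just builds the query URL for the Schedule B (itemized disbursements) endpoint; the `recipient_name` filter and `DEMO_KEY` placeholder reflect my reading of the API docs, so treat the exact parameter names as assumptions and check them against api.open.fec.gov before relying on this:

```python
from urllib.parse import urlencode

# openFEC's Schedule B endpoint lists itemized campaign disbursements.
# "DEMO_KEY" is a low-rate-limit placeholder; api.data.gov issues free keys.
BASE = "https://api.open.fec.gov/v1/schedules/schedule_b/"

def fec_disbursement_url(recipient: str, cycle: int, api_key: str = "DEMO_KEY") -> str:
    """Build a query URL for disbursements to a named recipient/vendor."""
    params = {
        "recipient_name": recipient,            # matches payee names on filings
        "two_year_transaction_period": cycle,   # e.g. 2026 for the 2025-26 cycle
        "sort": "-disbursement_date",           # newest payments first
        "api_key": api_key,
    }
    return BASE + "?" + urlencode(params)

url = fec_disbursement_url("Political Intelligence", 2026)
print(url)
```

Of course, as noted above, a search like this only finds payments made directly by the campaign committee; anything routed through a consulting firm won’t show the ultimate vendor.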
Look what you made us do
And that’s where I landed on the 13th. I decided at that time to set it aside as a mystery I might never solve: the details in the Daily Caller article were a bit thin, the poll was already two months old, and I wasn’t optimistic about hearing back from the campaign. If a pollster is this hidden, it’s unlikely I’ll get any additional information.
Then came the 23rd, when the second survey surfaced in the New York Post. This one had a little more detail: attached to the report was a topline document. The new information here: a logo, which I promptly put through a reverse image search. I was hoping to turn up a website, but no such luck. All it revealed was a very short tweet referring to another survey conducted by the firm, with even less information than the other reports. As to anything else, like a website or the names of principal investigators, I seemed to be running out of moves.
Now, here is the dilemma. It is stated very clearly in the 50+1 methodology that per our ethical standards, “pollsters will disclose the names of their principals publicly.” That is to say, we are not in the habit of publishing surveys from completely anonymous organizations. There are rare exceptions to this rule (pollsters may have good reasons to ask me to keep their names off the record, and I may honor those from time to time), but on the whole, it’s important to us that the firms we aggregate are on the up-and-up, and if we don’t know who is running them, we can’t vouch for that. However, given that this company appears to have been hired, routinely, by multiple campaigns, I do believe that this is a real polling firm conducting real polls that actually ask questions of respondents and report their answers.
I wrestled with the question of whether or not to aggregate the polls. Other polling aggregators, who either know something I don’t know or perhaps have looser standards, have done so. (To my knowledge, we are the only aggregator that has a public set of standards for which polls we include, so I don’t know what methodology others are following.) Given the wider media distribution of the second survey, users of our site might be surprised or confused not to see it appear, and I do believe the surveys were conducted according to standard polling practices. On the other hand, it is explicitly prohibited by our stated methodology to aggregate data if we do not know who is behind it.
Ultimately, unless I hear back from the campaign, I have decided not to aggregate the polls. I feel bad about it, but I also feel that if standards exist, they should be followed. I trust our readers enough to understand why I made that decision, and hopefully this little missive helps you to trust us, too.
And finally, poll lovers, if you know anything about Political Intelligence, get in touch!


