A Word About Polls

Pollsters and those who fund them must ask themselves: if 59 to 70% of the population in any given year believes that your media outlet pushes biased/corrupt propaganda† – would that not mean that ANY poll conducted in your name automatically carries the risk of highly skewed sample data? A scientist might think so – and might think so for the fourteen specific reasons listed herein.
Moreover, a deluded political group might strategically use such an effect to its advantage (agency, not bias). It might even seek to swing an election by exploiting, while never acknowledging, such an effect. This reality exemplifies the new style of poll and electorate gaming underway in American politics.

I actually had nine of the fourteen points in this article's critical redress assembled weeks before the 2016 US Presidential Election. However, I had not yet gathered all the material and example evidence I desired in order to elucidate and support each point. That election in particular – and the vast disparity between what the polls indicated and the election's final demographic breakdown (not to mention its electoral-vote breakdown) – provided much of the basis for this post.

Thus, at the risk of appearing to boast of predictive power through election-year 20/20 hindsight, I want to point out several factors which notoriously push political polls in particular into reflecting unanticipated bias or purposeful agency. Pew Research has provided an excellent outline of the dangers of collection and analytical bias (footnotes 1 through 4). While most of the material presented herein derives from sampling bias I have observed in demographic and customer studies over the years (customer surveys, A-B testing, confidence-interval significance, random sampling, etc.), it aligns well with those Pew Research principles.

When a survey is accidentally or consciously skewed in the direction of a single party's gain – this is bias.

When a poll is crafted so as to sway its outcome to influence a democratic vote – this is agency.
When conducted upon a population under tyranny or at risk of calamity – this is a human rights crime.

If there existed any question about the political goals entailed inside the Leftist-oriented agenda of Social Skepticism, let's dispense with that notion now. Such agency can easily be inferred through direct observation of the 2016 US Presidential Election. It became poignantly clear in the aftermath of that election that Social Skepticism is not by any means a democratic movement. The tactics which Social Skepticism applies in enforcement of pretend-science dogma (and wild claims to consensus) are the exact same tactics employed in election poll-taking and in the fake protests ongoing across the US right now. Both bear the features of manipulation by agenda-bearing forces and of hate-based activism against specific races, genders and demographics. These are the same tactics employed by the shills who infest social media and are paid to push dogmatic pseudo-scientific messages, all relating to one set of corporate/political goals and one single religion. These are tactics, tradecraft and signature practices developed and implemented by the same minds behind most of the expressions of tyranny innocent citizens face today.

Employing that segue into pseudoscience, let us examine one tactic of social manipulation practiced by Social Skepticism: the art of poll and consensus manipulation. Polls in American politics notoriously skew to the Left, towards the hate and talking-point agendas of Marxist and class-warfare ideologies. They also claim to incorporate 'science' in their collection and statistical protocols – no surprise there. The astute American citizen has learned that nothing could be further from the truth. Below we identify fourteen specific reasons why polls are notoriously unreliable, especially polls generated for the sole purpose of effecting and influencing the outcome of an election.

Poll Skewing Factors

Well known in industry, but ignored by ‘statisticians’ in highly contested or manipulated public polls:

I.  Means of Collection – bias-infusing polls use exclusively land-line phones as their channel and means of respondent communication – a tactic notorious for excluding males, mobile professionals and the full-time employed. (2 Pew Research)
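As a minimal sketch of this coverage bias (with purely hypothetical reachability rates, not Pew's figures), consider how a landline-only frame silently filters who can even be counted:

```python
# Minimal coverage-bias sketch: all rates are hypothetical, for illustration only.
import random

random.seed(1)

# Assumed population: candidate A and B supporters are equally numerous, but
# A's supporters are assumed far less reachable by land line.
population = [
    {"favors_A": True,  "p_landline": 0.25},
    {"favors_A": False, "p_landline": 0.45},
]

def poll(landline_only, n=10_000):
    """Sample n respondents, optionally restricted to landline-reachable voters."""
    reached, favor_a = 0, 0
    while reached < n:
        voter = random.choice(population)
        if landline_only and random.random() > voter["p_landline"]:
            continue  # no land line: silently excluded from the sampling frame
        reached += 1
        favor_a += voter["favors_A"]
    return favor_a / reached

print("true support for A:     50.0%")
print(f"landline-only estimate: {poll(True):.1%}")   # lands well below 50%
print(f"full-frame estimate:    {poll(False):.1%}")  # recovers ~50%
```

Nothing in the landline-only run is dishonest at the tabulation stage; the skew is baked in before a single answer is recorded.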

II.  Regional Bias Exploitation – call sampling is conducted in the New England states or in California, reflecting a bias towards tax-oriented businesses, such as healthcare, insurance, government offices, and the corporations which work and contract with such agencies. (4 Pew Research)

III.  Bradley Effect – people have a tendency to express opinions and intent which fit a social-pressure model, or which keep them out of the 'bad guy' bucket, when polled on polarizing issues. This tends to skew polls notoriously to the left. (1 Pew Research)

IV. Crate Effect – the impact of persons who purposely give the opposite response to what they really think, out of animosity towards the polling group or the entailed issue (especially if it is non-free press), towards its perceived history of bias, or towards the circus around elections or the elections themselves. This false left-leaning bias is generated most often inside groups who believe media outlets to be left-leaning and unfair. (5 Political Hay)
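The two effects compound. A back-of-envelope sketch, with hypothetical rates, shows how an honest 50/50 sentiment comes back reading several points to one side:

```python
# Sketch of how the Bradley and Crate Effects distort a reported margin.
# Every rate below is hypothetical, chosen only to make the arithmetic visible.

true_support = 0.50    # actual share intending to vote for the 'socially risky' choice
bradley_rate = 0.08    # share of those supporters reporting the 'safe' answer instead
crate_rate   = 0.05    # share of all respondents answering opposite to their intent,
                       # out of animosity toward the polling group

# Bradley Effect: some true supporters report the socially safe answer.
after_bradley = true_support * (1 - bradley_rate)

# Crate Effect: a slice of respondents flips in both directions.
reported = after_bradley * (1 - crate_rate) + (1 - after_bradley) * crate_rate

print(f"true support:     {true_support:.1%}")   # 50.0%
print(f"reported support: {reported:.1%}")       # 46.4% -- a phantom 3.6 point skew
```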

Green Eggs and Ham (Poll) Error

/philosophy : sentiment : analytics : error/ : the combined Crate-Bradley Effect in polling error. Including the sentiment of those who have never heard of the topic; including responses from those who know nothing about the topic but were instructed to throw the poll results; and finally, treating both of these groups as valid 'disagree/agree' sentiment signal data. Its hallmark is the presence of an excessively small number of 'I don't know' responses in a controversial poll's results. There exists an ethical difference between an informed-yet-mistaken hunch, and a circular-club-recitation claim to authority based upon a complete absence of exposure to (ignorance of) the topic at all. In reality, the former is participating in the poll; the latter is not. The latter constitutes only a purely artificial agency-bias, which requires an oversampling or exclusion adjustment. One cannot capture a sentiment assay about the taste of green eggs and ham among people who either do not know what green eggs and ham is, or have never once tasted it because they were told it was bad.
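A minimal sketch of the screening step this error omits – the field names and the tiny data set are hypothetical:

```python
# Sketch: separating informed sentiment from never-exposed noise before tallying.

responses = [
    {"heard_of_topic": True,  "answer": "agree"},
    {"heard_of_topic": True,  "answer": "disagree"},
    {"heard_of_topic": True,  "answer": "dont_know"},
    {"heard_of_topic": False, "answer": "disagree"},  # artificial signal, not sentiment
    {"heard_of_topic": False, "answer": "disagree"},  # artificial signal, not sentiment
]

informed = [r for r in responses if r["heard_of_topic"]]
excluded = len(responses) - len(informed)

tally = {}
for r in informed:
    tally[r["answer"]] = tally.get(r["answer"], 0) + 1

print(f"excluded for zero exposure: {excluded}")
print(f"informed sentiment tally:   {tally}")
# An implausibly small 'dont_know' share in a controversial poll is itself a
# red flag that this screening step was never performed.
```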

V. Crate/Bradley Power Effect – the misleading impact of the Crate and Bradley Effects falsely convinces poll administrators of the power they hold to sway the opinion of 'undecideds', and misleads their sponsors into funding more and more polls which follow the same flawed protocols and traps. (5 Political Hay)

VI. Streetlight Effect – a type of observational bias which occurs when people only search for something where it is easiest to look.

VII.  Trial Heat – the overall pressure placed on respondent results by the structure of, or the questions inside, the poll itself (1 Pew Research), including:

a.  Leading preparatory questions – employing questions which are pejoratively framed or crafted to lead the poll respondent, in order to skew undecided voters prior to asking the core question, and

b.  Iterative poisoning – running the same poll over and over again in the same community and visibly publishing the desired results – akin to poisoning the jury pool.

VIII.  Crazy-8 Factor – for any question you pose, there is always a naturally errant 8-percent quotient who do not understand, don't care, purposely screw with you, or really do think that gravity pulls up and not down. All that has to be done to effect a 2 – 4 percentage point skew in the data is to bias the questioning so that the half of the Crazy-8 which disfavors your desired result is filtered out through more precise or recursive questions – questions which are not replicated in the converse for the other half of the Crazy-8 which favors your desired result. The analytic which detects this poll manipulation is called a 'forced-choice slack analysis' – it examines the Crazy-8 and/or neutral respondents to see if they skew toward a bias in any particular direction (a sketch follows below).
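A minimal sketch in the spirit of such a slack check, using hypothetical counts and a plain normal approximation in place of whatever test a working analyst would prefer:

```python
# Test whether the Crazy-8/neutral pool breaks evenly or skews one direction.
import math

neutral_pool = {"candidate_A": 61, "candidate_B": 39}  # forced choices of the slack

n = sum(neutral_pool.values())
p_hat = neutral_pool["candidate_A"] / n

# Under no manipulation the slack should split roughly 50/50.
z = (p_hat - 0.5) / math.sqrt(0.25 / n)
print(f"slack split: {p_hat:.1%} toward A (n={n}), z = {z:.2f}")
if abs(z) > 1.96:
    print("slack skews significantly one way -> possible question-filtering bias")
else:
    print("no significant directional slack detected")
```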

IX.  Form of Core Question – asking a different form of THE CORE question than is implied by the poll, or a different question per polling group: 1. Who do you favor? vs. 2. Who will you vote for? vs. 3. Who do you think will win? (3 Pew Research)

X.   Follow Through Effect – only 35 to 55% of the people who are polled, on average, will actually turn out to vote. (6 2016 General Election Turnout)
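A minimal sketch of why this matters (all shares and propensities hypothetical): raw sentiment and turnout-weighted sentiment diverge whenever support and follow-through are correlated:

```python
# Sketch: raw polled sentiment versus a turnout-weighted estimate.

groups = [
    # (share of respondents, support for candidate A, assumed turnout propensity)
    (0.50, 0.55, 0.45),   # A-leaning group, lower-propensity voters
    (0.50, 0.45, 0.65),   # B-leaning group, higher-propensity voters
]

raw = sum(share * support for share, support, _ in groups)
weighted = (
    sum(share * support * turnout for share, support, turnout in groups)
    / sum(share * turnout for share, _, turnout in groups)
)

print(f"raw polled support for A:  {raw:.1%}")       # 50.0%
print(f"turnout-weighted support:  {weighted:.1%}")  # drops below 50%
```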

XI.  Oversampling – declaring a priori that a bias exists in a population, in the larger S pool from which a sample s is derived; then crafting a targeted addition of population members from S to push sample s in the opposite signal (direction and magnitude) from the anticipated bias. (1, 4 Pew Research)
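A minimal sketch of the arithmetic, with hypothetical counts:

```python
# Sketch: a targeted oversample that is never weighted back down moves the topline.

base_n,  base_support  = 900, 0.48   # sample s drawn roughly proportionally from S
extra_n, extra_support = 300, 0.62   # targeted addition from a favorable slice of S

blended = (base_n * base_support + extra_n * extra_support) / (base_n + extra_n)

print(f"base sample alone:          {base_support:.1%}")  # 48.0%
print(f"with unweighted oversample: {blended:.1%}")       # 51.5%
# Honest practice down-weights the oversampled members back to their true share
# of S; skipping (or reversing) that adjustment shifts the published number.
```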

XII. Influencing Effect – the technique of a polling group releasing preliminary polling results during the influencing stage of iterative polling (for example, election sentiment) – results which do not fully reflect all the data gathered to that point, but rather target implanting a specific perception or message in the mind of the target polling population – and thereafter showing results which include all collected data during the critical actual-measurement phase, or in the anticipated completion stages (fictus scientia – see the end of this article).

XIII.  Gaussian Parametrization – the error made by statistical processors of polling data in which they assume that humans reliably follow a Gaussian distribution, and that smaller sample sizes can therefore be used to reliably parametrize the whole.
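A minimal simulation sketch (with hypothetical clustering parameters) of how far the independence assumption behind the textbook margin of error can understate real polling error:

```python
# The textbook margin of error assumes independent (i.i.d.) respondents. If
# opinion clusters by household or region, the same n carries less information
# and the Gaussian-based claim understates the real error.
import math
import random

random.seed(2)

def one_poll(n_clusters=50, per_cluster=20, p=0.5, cluster_sd=0.10):
    """Simulate one poll of clustered respondents (e.g., grouped by region)."""
    votes = []
    for _ in range(n_clusters):
        lean = min(max(random.gauss(p, cluster_sd), 0.0), 1.0)  # cluster-level lean
        votes.extend(random.random() < lean for _ in range(per_cluster))
    return sum(votes) / len(votes)

n = 50 * 20
claimed_moe = 1.96 * math.sqrt(0.25 / n)   # the usual i.i.d. Gaussian claim

estimates = [one_poll() for _ in range(2000)]
mean = sum(estimates) / len(estimates)
actual_sd = math.sqrt(sum((e - mean) ** 2 for e in estimates) / len(estimates))

print(f"claimed 95% MOE (i.i.d. assumption): +/-{claimed_moe:.1%}")
print(f"actual spread with clustering:       +/-{1.96 * actual_sd:.1%}")
```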

XIV.  Early Poll Bias/Tip-in – polls early in a process or election cycle, or addressing a question for the first time, tend to reflect the bias of the poll sponsors or developers to a more hyperbolic degree. Early election and primary returns will always favor the Left, and then hone in gradually to a more accurate representation as time progresses and other competitive polls force them to come clean. The final poll is always justifiable, and the earlier polls are simply portrayed as reflecting changes in participant sentiment (which is usually baloney).

Ironically, item XI above, Oversampling, is typically addressed in the notes section of polling analytical reports. However, such oversampling signal compensation is typically only practiced as a means to address prima facie and presumed S-pool biases, and rarely reflects any adjustment attributable to the other factors above.

Until polls are conducted by low-profile, scientific, unbiased collection and analytical groups – and not the agenda-laden parties listed below – they will continue to mislead, and to be used as a lever in this pretense to effect a political end-game. For the record, below are the polls, indicating the retraction-back-to numbers posted the day before the election (reflecting the shock of the early-voting results, which had the pollsters pare back the wild landslide victory they had predicted for Clinton). In other words, the poll models never actually produced the final November 7 differential – that was a manual intervention made in panic, so that the models would not look so badly errant in the end.

A note about models and prediction: if you adjust and tweak your model or its parameters so that it now produces numbers more in concert with actual early-return data, you have not increased the predictive reliability of your model. Simulation and modeling professionals get this – poll statisticians do not.
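A minimal sketch of that distinction, with made-up numbers: a model re-tuned to pass exactly through every observed data point fits the data in hand perfectly, yet can predict a genuinely new point far worse than the simple model it replaced:

```python
# Re-tuning to reproduce past data is curve-fitting, not prediction.
import random

random.seed(4)

def truth(x):
    return 0.5 + 0.02 * x                 # the underlying trend being forecast

observed = [(x, truth(x) + random.gauss(0, 0.03)) for x in range(5)]

def tweaked(points, x):
    """Interpolating polynomial through all points: zero in-sample error."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def simple(x):
    return 0.5 + 0.018 * x                # the original, slightly-off model

x_new = 6                                 # a point neither model has seen
print(f"tweaked model predicts: {tweaked(observed, x_new):+.3f}")
print(f"simple model predicts:  {simple(x_new):+.3f}")
print(f"actual value:           {truth(x_new):+.3f}")
# The tweak bought perfect agreement with past data at the price of amplifying
# its noise; nothing about out-of-sample reliability was improved.
```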

Enjoy a laugh, but remember – these are the same people, and the same methods, which are employed to advertise to you what it is that scientists supposedly think (7 Real Clear Politics) – and such conclusions are derived with even less confidence-bearing methods of data collection than election polls. Also for the record, as of Feb 16th 2017 at 11:03 am PST, the final outcome of the popular vote was Clinton 65,853,516 – Trump 62,984,825: a 2.2 percentage point Clinton edge, with respect to the number conventions used below. So no one below really got the final result right, with the exception of the conservative IBD/TIPP tracking poll, for a Trump-Clinton race only. (Source: CNN Election Results Update, 2/16/2017)

The average Clinton skew (below, right column) was 6.7% in favor of Clinton during the course of the polling and election-influence timeframe. The final poll results then un-skewed back to 3.3% by the time of early-returns voting. Where a poll ended does not count, since that is a display to save scientific face (fictus scientia); it is where the poll resided during its influencing timeframe which counts. What is clear is that the polling firms were exaggerating their Clinton-lead results by a 2:1 magnitude during the critical opinion-influencing period, and subsequently retracted to a 1.1-point actual unacknowledged bias, or a 2.6% end-state methodological bias.

Actual Final 2016 Election Result:  Clinton +2.2  (average skew = 4.5 points left bias, or an 8.7% error rate or bias)

[Image: skewed-polls – table of final 2016 polling-firm results versus the actual outcome]

epoché vanguards gnosis


†  The 2015 State of the First Amendment Survey, conducted by the First Amendment Center and USA Today; 7/03/2015

1  Pew Research: U.S. Survey Research, Election Polling; http://www.pewresearch.org/methodology/u-s-survey-research/election-polling/

2  Pew Research: U.S. Survey Research, Collecting Survey Data; http://www.pewresearch.org/methodology/u-s-survey-research/collecting-survey-data/

3  Pew Research: U.S. Survey Research, Questionnaire Design; http://www.pewresearch.org/methodology/u-s-survey-research/questionnaire-design/

4  Pew Research: U.S. Survey Research, Sampling; http://www.pewresearch.org/methodology/u-s-survey-research/sampling/

5  Political Hay: How Poll Bias Obscures Trump’s Likely Election; https://spectator.org/how-poll-bias-obscures-trumps-likely-election/

6  2016 General Election Turnout Rates; http://www.electproject.org/2016g

7  Real Clear Politics: Latest Polls; http://www.realclearpolitics.com/epolls/latest_polls/