Political prediction markets vs. political polls

Could political polls be replaced by political betting markets?

The rise of political betting is not just lucrative for the bettors and the platforms. Its advocates also hope that one day, it can replace the political prediction industry generally and remake the larger political media ecosystem. “[Traders are] incentivized with cold, hard cash to separate the emotion, to make a bet with their head rather than the heart,” said John A. Phillips, the CEO of the betting platform PredictIt. That means, the boosters argue, that they are more accurate than traditional polls and analyses of those polls…

But another group is paying attention to these platforms’ rise: those who have a special interest in political predictions, from campaigns to journalists to any number of groups and individuals who might be affected by the outcome of an election. Rather than relying solely on polling, punditry or counting yard signs, in advance of big election nights, the markets put together all available information and spit out a number that looks like the collective wisdom of a lot of people with money on the line. That can often lead to a number that is a more comprehensive reflection of a certain candidate’s chances of winning than any single poll or piece of political analysis…

While these markets’ long-term prediction capabilities can often equal or beat the predictions of most conventional polls, where they really have an edge is in rapidly responding to events…

But 2024 is just a single data point. In 2016, for example, prediction markets underrated Trump’s chances compared to the 538 model. With few major U.S. elections that have a lot of betting volume to study, the truth is that it’s still not possible to know for certain whether prediction markets can consistently outperform polling averages.
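A standard way to compare the accuracy of probability forecasts, whether from markets or poll-based models, is a proper scoring rule such as the Brier score. A minimal sketch, with entirely made-up forecast probabilities:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    Lower is better; a forecaster who always says 0.5 scores 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical win probabilities for five races, all ultimately won (outcome = 1)
market_probs = [0.70, 0.55, 0.80, 0.60, 0.90]
poll_probs = [0.60, 0.50, 0.75, 0.65, 0.85]

print(brier_score(market_probs, [1] * 5))  # 0.1005
print(brier_score(poll_probs, [1] * 5))    # 0.1235
```

Averaged over many contests, the lower score wins. The catch is the one noted above: there are very few high-volume US elections to average over, so neither side can yet claim a statistically meaningful edge.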

I wonder how much of this optimism about political betting stems from the perceived and real shortcomings of polling as of 2025. Polling has not performed well over the last decade or so. Response rates are low. There are many polls and many polling companies claiming they can produce good results. At what point does polling become so inadequate that media outlets and others stop sponsoring polls or using the information? I could also imagine a point in the next few years where a number of pollsters stop operating as other organizations demonstrate they can get more accurate results.

It would also be interesting to know how much money there is to be made in prediction markets versus what is invested in the polling industry. Who wins when lots of actors are involved in either polls or predictions? And when do regular Americans participate in political prediction markets?

And let’s see how academic studies of polls and prediction markets help shape the upcoming narratives about each. How much will careful studies help identify the strengths and weaknesses of each approach, or are there forces at work that will shape how people view these options?

Sanctifying Suburbia is out! Explaining the forces behind the evangelical embrace of the American suburbs

If observers in the United States in the late nineteenth century had to predict the geography of American evangelicals in the year 2000, what would they have said? Would they have foreseen an evangelical presence in the biggest cities? Important evangelical congregations, organizations, and institutions resided in New York City, Philadelphia, and Chicago. From these population centers (and ones that emerged in the twentieth century like Los Angeles or Dallas), evangelicals could reach the masses. Or would they have selected small towns and more rural areas? Perhaps they would have thought of evangelicals living in particular regions, in the kinds of places that would be called “the heartland” or “flyover country” or “the Bible Belt.” These places with a slower pace of life and traditional values may have aligned with everyday evangelical life.

I argue in Sanctifying Suburbia (out in paperback today!) that by the turn of the twenty-first century American evangelicals were firmly suburban. Evangelicals did not simply follow many other Americans to the suburbs (the country was majority suburban in the 2000 Census); evangelicals actively chose to locate in the suburbs.

Why? Multiple factors led to this and different chapters in the book discuss the components that contributed to the evangelical embrace of the growing American suburbs. The story includes:

  1. Racial and ethnic change in cities and evangelicals moving to whiter suburbs.
  2. The National Association of Evangelicals operating from suburban settings for much of its existence after its founding in the 1940s.
  3. Locating in some evangelical clusters – like Wheaton and Carol Stream, Illinois and Colorado Springs, Colorado – that offered particular amenities and synergy between evangelical congregations and organizations.
  4. Seeing cities as incompatible with evangelical lifestyles and goals.
  5. An individualized view of engaging with places and society while also holding up heaven as the ultimate city/place.

And this is not just a story of the twentieth century; some of the seeds were sown prior to mass suburbanization and developed over decades.

Where does this leave American evangelicals in the third decade of the twenty-first century? As a whole, they may feel most comfortable in suburban settings where day-to-day life focuses on families in single-family homes, middle-class and populist activities and values rule the day, and attracting attendees and gathering resources from growing suburban populations occupies their organizational efforts.

Trying to prove (or disprove) the Infinite Monkey Theorem

A new paper suggests monkeys will have a hard time coming up with the works of Shakespeare:

The Infinite Monkey Theorem is a famous thought experiment that states that a monkey pressing random keys on a typewriter would eventually reproduce the works of the Bard if given an infinite amount of time and/or if there were an infinite number of monkeys.

However, in the study published in the peer-reviewed journal Franklin Open, two mathematicians from Australia’s University of Technology Sydney have rejected this theorem as “misleading” within the confines of our finite universe…

They took the assumption that the current population of around 200,000 chimpanzees would remain the same over the lifespan of the universe of one googol years (that’s 1 followed by 100 zeros). They also assumed that each chimpanzee would type one key per second for every second of the day, with each monkey having a working lifespan of just over 30 years.

Using these assumptions, the researchers calculated that among these randomly-typing monkeys, there is just a 5% chance that a word as simple as “bananas” would occur in the lifespan of one chimpanzee…

“By the time you’re at the scale of a full book, you’re billions of billions of times less likely,” he continued.
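The paper’s back-of-the-envelope probability can be approximated in a few lines. The keyboard size here is an assumption on my part; the authors’ exact figure may differ:

```python
import math

KEYS = 30  # assumed typewriter keyboard size; the paper's exact figure may differ
SECONDS_PER_YEAR = 365.25 * 24 * 3600
LIFESPAN_YEARS = 30  # one keystroke per second over a ~30-year working life

keystrokes = LIFESPAN_YEARS * SECONDS_PER_YEAR
# Chance that any given 7-character window spells "bananas"
p_window = (1 / KEYS) ** len("bananas")
# Expected occurrences over one chimp's lifetime, then a Poisson approximation
# for the probability of at least one occurrence
expected = keystrokes * p_window
p_at_least_once = 1 - math.exp(-expected)
print(f"{p_at_least_once:.1%}")  # about 4%, in the neighborhood of the paper's ~5%
```

The same arithmetic makes the book-length case intuitive: the exponent grows with the length of the target text, so the probability collapses toward zero far faster than any finite supply of monkeys or time can grow.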

Perhaps this needs to be updated for today’s world: instead of animals, why not consider an infinite number of machines randomly producing text? Could they do the work of monkeys much faster and eventually converge on Shakespeare?

I also wonder how this parallels thinking about what humans or societies can accomplish. Given infinite or finite sets of resources, what can be produced? If humans have X amount of resources over Y amount of time, how likely is it that a particular issue can be solved, a particular innovation will emerge, or a particular problem will arise? Such predictions rely on estimating probabilities, something that is very hard to do when it requires forecasting future conditions and possibilities.

When mortgage rates do not decrease as expected

The Fed cut interest rates. Mortgage rates did not go down; they went up:

Since Fed Chair Jerome Powell lowered interest rates by 50 basis points on September 18, the average 30-year fixed mortgage rate has moved higher, not lower.

According to data from Mortgage News Daily, the average 30-year fixed mortgage rate has jumped about 47 basis points since the Fed rate cut, to 6.62% from 6.15%…

Going forward, the situation hinges on the Fed’s rate-lowering schedule. At present time, market expectations — as calculated by the CME FedWatch tool — are for two more 25-basis-point cuts this year.

Whether that will manifest itself in lower mortgage rates is up in the air. Two major upcoming events are the Consumer Price Index release this Thursday, as well as the October jobs report in the first week of November.

Life does not always go as predicted, though acknowledging this makes it easier to cope with unexpected happenings. And with large-scale systems, lots of people might hold an expectation or be told something will happen. With all the moving pieces in the financial system (plus its interactions with other parts of the world), patterns can change or there can be exceptions to regular patterns. Since home sales are an important part of economic, social, and community life, changes like these have ripple effects. If rising rates slow down home buying and selling, that affects a lot of actors.

One question to ask is whether there are certain periods or conditions when the predictable is less likely to happen. Is this rise in rates, at a moment when they were expected to fall, a one-time occurrence or part of broader instability? How predictable are mortgage interest rates given particular circumstances?

One prediction that Dallas/Fort Worth-Houston-Austin will replace New York-Los Angeles-Chicago by 2100

moveBuddha has a prediction about which three US cities will have the most people by the end of this century:

  • The future belongs to Texas. America’s three biggest cities by 2100 will be #1 Dallas, #2 Houston, and #3 Austin. Fast-growing San Antonio also ranks at #11.
  • The Sunbelt keeps rising. Phoenix is projected to be the 4th-biggest U.S. city by population in 2100. Other Sunbelt cities in the top 10 are #6 Atlanta, #9 Orlando, and #10 Miami.
  • NYC and L.A. are currently the top two biggest U.S. cities, but they’re projected to fall to #5 and #7, respectively, by the year 2100.

The methodology to arrive at this?

We wanted to know at moveBuddha what U.S. metropolitan areas would see the biggest population growth by 2100. We did this by using the compound annual population growth rate of the biggest U.S. metro areas (250,000 residents or more) between the 2010 and 2020 U.S. Census estimates and extrapolating it over 80 years.

This was an inexact science, and growth rates are bound to change. But it gave us a rough idea of which American cities may rise to the top by the dawning of the 22nd century. Climate change effects, migration patterns from climate change, and other unforeseen events could change things.
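The extrapolation described above can be sketched in a few lines. The populations here are round hypothetical numbers of my own, not Census figures:

```python
def extrapolate_population(pop_2010: float, pop_2020: float, years: int = 80) -> float:
    """Project a population forward using the 2010-2020 compound annual growth rate."""
    cagr = (pop_2020 / pop_2010) ** (1 / 10) - 1
    return pop_2020 * (1 + cagr) ** years

# A hypothetical metro growing from 6.0M to 7.6M over the decade (~2.4% per year)
projected = extrapolate_population(6_000_000, 7_600_000)
print(f"{projected:,.0f}")  # roughly 50 million by 2100
```

Compounding is what makes the method so sensitive: a modest-sounding annual rate, held constant for 80 years, multiplies a metro several times over, which is exactly why the assumption of constant growth rates deserves scrutiny.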

Two parts of this projection seem implausible to me. First, extrapolating the current rates of growth to last for more than seven decades. Growth rates will likely rise or fall across different metropolitan regions. It is hard to imagine many places will be able to keep up high rates of growth for that long. Second, the size of these regions. There is no US region currently near the predicted populations in 2100. Would this come from significant increases in density in the central areas or even more sprawling regions? It would be interesting to see where all those people would live and work.

Of course, at this point it is hard to bet against the ongoing population growth of the Sunbelt.

And what would this do to the status of New York City and Los Angeles? Chicago has some experience with slipping down the rankings, but could NYC handle such a decline well?

Punxsutawney Phil is worse than a coin flip in predicting the weather

How well does Punxsutawney Phil predict the duration of winter? Not so well according to one source:

Phil’s track record is not perfect. “On average, Phil has gotten it right 40% of the time over the past 10 years,” according to the National Centers for Environmental Information, a division of the National Oceanic and Atmospheric Administration, which manages “one of the largest archives of atmospheric, coastal, geophysical, and oceanic research in the world.”

The three-month temperature outlook for February through April 2023 calls for above normal temperatures across the eastern and southern US and below normal temperatures for the northwestern US, according to the Climate Prediction Center…

Despite his mixed record when it comes to actually forecasting the weather, there’s no doubt Phil’s fans still hold him in high regard.

After all, his full title is Punxsutawney Phil, Seer of Seers, Sage of Sages, Prognosticator of Prognosticators, and Weather Prophet Extraordinary.

In other words, Phil is worse than a coin flip in predicting the coming weather. This is not good; any expert would hopefully be better than that.

However, there is some evidence that many expert predictions about the future are not great. How well can people predict the future performance of the stock market, or natural disasters, or geopolitical change? Not so well. And it is not just that the future is difficult to predict; because we believe we can predict it, wrong projections can be even more damaging.

I suspect very few people care if Punxsutawney Phil is right or wrong. They like the tradition, the ritual, a festive gathering in the middle of winter. Still, Phil offers a window into our own abilities and confidence about knowing the future…and it is a cloudy window at best.

Does predicting bad Thanksgiving traffic and airport congestion change people’s behavior?

Each year, INRIX releases a report regarding Thanksgiving congestion. Here are predictions for the Chicago area:

Along with Chicago, highways in Atlanta, New York City and Los Angeles will be the busiest, according to data analytics firm INRIX. To avoid the worst congestion, INRIX recommends traveling early Wednesday, or before 11 a.m. Thanksgiving Day…

The report predicts almost double the typical traffic along a westbound stretch of Interstate 290 from Chicago’s Near West Side to suburban Hillside, peaking between 3 and 5 p.m. Wednesday. Significant additional congestion is expected Wednesday along the same stretch of eastbound I-290, with an anticipated 84% spike in traffic…

The outlook isn’t much brighter for travelers flying out of the city. Chicago-based United Airlines expects O’Hare to be its busiest airport, with more than 650,000 customers anticipated for the holiday. It reports Sunday, Nov. 27, will be its busiest travel day since before the pandemic, with 460,000 travelers taking to the skies. Nationwide, the airline awaits more than 5.5 million travelers during the Thanksgiving travel period, up 12% from last year and nearly twice as many as in a typical November week…

“Regardless of the transportation you have chosen, expect crowds during your trip and at your destination,” Twidale said. Travelers with flexible schedules should consider off-peak travel times to avoid the biggest rush.

The last paragraph quoted above might be key: given the predicted congestion, will people choose different times and days to travel? And, if enough people do this, will the model be wrong?

Transportation systems can only handle so many travelers. Highways attempt to accommodate rush hour peaks. Airports try to handle the busier days. Yet, the systems can become overwhelmed in unusual circumstances. Accidents. Pandemics. Holiday weekends where lots of people want to go certain places.

If enough people hear of this predicted model, will they change their plans? They may or may not be able to, given work hours, school hours, family plans, pricing, and more. These are the busiest times because they are convenient for a lot of people. If all or most workplaces and schools closed for the whole week, then road and air traffic might be distributed differently.

I would be interested in hearing how many people need to change their behavior to change these models. If more people decide they will leave earlier or later Wednesday, does that mean the bad traffic is spread out all day Wednesday or the predicted doomsday traffic does not happen? How many fliers need to change their flights to Saturday or Monday to make a difference?

If people do change their behavior, perhaps this report is really more of a public service announcement.

The difficulty of collecting, interpreting, and acting on data quickly in today’s world

I do not think the issue is just limited to the problems with data during COVID-19:

If, after reading this, your reaction is to say, “Well, duh, predictions are difficult. I’d like to see you try it”—I agree. Predictions are difficult. Even experts are really bad at making them, and doing so in a fast-moving crisis is bound to lead to some monumental errors. But we can learn from past failures. And even if only some of these miscalculations were avoidable, all of them are instructive.

Here are four reasons I see for the failed economic forecasting of the pandemic era. Not all of these causes speak to every failure, but they do overlap…

In a crisis, credibility is extremely important to garnering policy change. And failed predictions may contribute to an unhealthy skepticism that much of the population has developed toward expertise. Panfil, the housing researcher, worries about exactly that: “We have this entire narrative from one side of the country that’s very anti-science and anti-data … These sorts of things play right into that narrative, and that is damaging long-term.”

My sense as a sociologist is that the world is in a weird position: people expect relatively quick solutions to complex problems, there is plenty of data to think about (even as the quality of the data varies widely), and there are a lot of actors interpreting and acting on data or evidence. Put this all together and it can be difficult to collect good data, make sound interpretations of that data, and make good choices about acting on those interpretations.

In addition, making predictions about the future is already difficult even with good information, interpretation, and policy options.

So, what should social scientists take from this? I would hope we can continue to improve our abilities to respond quickly and well to changing conditions. Typical research cycles take years, but that timeline is not workable in certain situations. There are newer methodological options that allow for quicker data collection and new kinds of data; all of this needs to be evaluated and tested. We also need better processes for reaching consensus more quickly.

Will we ever be at a point where society is predictable? This might be the ultimate dream of social science if only we had enough data and the correct models. I am skeptical but certainly our methods and interpretation of data can always be improved.

Thinking about probabilistic futures

When looking to predict the future, one historian of science suggests we need to think probabilistically:

The central message sent from the history of the future is that it’s not helpful to think about “the Future.” A much more productive strategy is to think about futures; rather than “prediction,” it pays to think probabilistically about a range of potential outcomes and evaluate them against a range of different sources. Technology has a significant role to play here, but it’s critical to bear in mind the lessons from World3 and Limits to Growth about the impact that assumptions have on eventual outcomes. The danger is that modern predictions with an AI imprint are considered more scientific, and hence more likely to be accurate, than those produced by older systems of divination. But the assumptions underpinning the algorithms that forecast criminal activity, or identify potential customer disloyalty, often reflect the expectations of their coders in much the same way as earlier methods of prediction did.

Social scientists have long hoped to contribute to accurate predictions. We want to both better understand what is happening now as well as provide insights into what will come after.

The idea of thinking probabilistically is a key part of the Statistics course I teach each fall semester. We can easily fall into using language that suggests we “prove” things or relationships. This implies certainty and we often think science leads to certainty, laws, and cause and effect. However, when using statistics we are usually making estimates about the population from the samples and information we have in front of us. Instead of “proving” things, we can speak to the likelihood of something happening or the degree to which one variable affects another. Our certainty of these relationships or outcomes might be higher or lower, depending on the information we are working with.
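As a minimal illustration of estimating rather than proving, here is a 95% confidence interval for a sample proportion, using an invented poll result:

```python
import math

# Invented example: 520 of 1,000 respondents favor a candidate
n, successes = 1000, 520
p_hat = successes / n

# Standard error and 95% interval under the normal approximation
se = math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"estimate {p_hat:.1%}, 95% CI ({low:.1%}, {high:.1%})")  # roughly (48.9%, 55.1%)
```

The interval, not the point estimate, is the honest claim: with these numbers the data are consistent with anything from a toss-up to a clear lead, which is exactly the probabilistic framing being described.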

All of this relates to predictions. We can work to improve our current models to better understand current or past conditions but the future involves changes that are harder to know. Like inferential statistics, making predictions involves using certain information we have now to come to conclusions.

The idea of thinking both (1) probabilistically and (2) in terms of plural futures can help us understand our limitations in considering the future. Regarding probabilities, we can assign higher or lower likelihoods to our predictions of what will happen. In thinking of plural futures, we can work with multiple options or pathways that may occur. All of this should be accompanied by humility and creativity, as it is difficult to predict the future even with great information today.

Zillow sought pricing predictability in the supposedly predictable market of Phoenix

With Zillow stopping its iBuyer initiative, here are more details about how the Phoenix housing market was key to the plan:

Tech firms chose the Phoenix area because of its preponderance of cookie-cutter homes. Unlike Boston or New York, the identikit streets make pricing properties easier. iBuyers’ market share in Phoenix grew from around 1 percent in 2015—when tech companies first entered the market—to 6 percent in 2018, says Tomasz Piskorski of Columbia Business School, who is also a member of the National Bureau of Economic Research. Piskorski believes iBuyers—Zillow included—have grown their share since, but are still involved in less than 10 percent of all transactions in the city…

Barton told analysts that the premise of Zillow’s iBuying business was being able to forecast the price of homes accurately three to six months in advance. That reflected the time to fix and sell homes Zillow had bought…

In Phoenix, the problem was particularly acute. Nine in 10 homes Zillow bought were put up for sale at a lower price than the company originally bought them, according to an October 2021 analysis by Insider. If each of those homes sold for Zillow’s asking price, the company would lose $6.3 million. “Put simply, our observed error rate has been far more volatile than we ever expected possible,” Barton admitted. “And makes us look far more like a leveraged housing trader than the market maker we set out to be.”…

To make the iBuying program profitable, however, Zillow believed its estimates had to be more precise, within just a few thousand dollars. Throw in the changes brought in by the pandemic, and the iBuying program was losing money. One such factor: In Phoenix and elsewhere, a shortage of contractors made it hard for Zillow to flip its homes as quickly as it hoped.

It sounds like the rapid, sprawling growth of Phoenix in recent decades made it attractive for trying to estimate and predict prices. The story above highlights cookie-cutter subdivisions and homes – they are newer and similar to each other – and I imagine this helps the models compared to older cities where there is more variation within and across neighborhoods. Take that, critics of suburban ticky-tacky houses and conformity!

But, when conditions change – COVID-19 hits which then changes the behavior of buyers and sellers, contractors and the building trades, and other actors in the housing industry – that uniformity in housing was not enough to easily profit.

As the end of the article suggests, the algorithms could be changed or improved, and other institutional buyers are also interested. Is this just a matter of having more data and/or better modeling? Could it all work for these companies outside of really unusual times? Or perhaps there really are housing markets, in the US or around the globe, that are more predictable than others?

If suburban areas and communities are the places where this really takes off, the historical patterns of people making money off what are often regarded as havens for families and the American Dream may continue. Sure, homeowners may profit as their housing values increase over time but the bigger actors including developers, lenders, and real estate tech companies may be the ones who really benefit.