I often criticize modern astrology on my Russian-language YouTube channel. Viewers who practice astrology often accuse me of cherry-picking blatantly ignorant astrologers or absurd statements. At the same time, they’re convinced that they know exactly how to interpret a horoscope to reveal someone’s personality.
I prefer to rely on facts rather than self-assurance. That’s why I invited experienced, practicing astrologers—graduates of well-known astrology schools—to take part in a simple experiment. I asked them to match a personality description to one of five horoscopes.
[toc]
The Essence of the Experiment
A random person with no ties to astrology took a well-known psychological test, the Myers-Briggs Type Indicator (MBTI), and provided me with their birth data, including an accurate birth time.
Using this information, I cast their horoscope. In addition to this real one, I generated four random horoscopes—these most likely don’t even belong to real people, as I simply picked random years, dates, countries, and cities. In the end, I had five horoscopes: one genuine (matching the psychological profile), and four completely random.
The goal of the experiment is not to validate a particular school or standard of astrological interpretation, but rather to assess the overall state of contemporary astrology — its coherence, adequacy, and the practical applicability of its techniques on the open market.
For this reason, using a representative sample of randomly selected practicing astrologers is not only appropriate but essential. It reflects the diversity and real-world conditions in which New Age astrology is currently practiced, rather than an idealized or academically unified version of it.
The following paragraphs are for true geeks who want to dive into the statistics. If that’s not your thing, feel free to skip ahead to the section “Beginning of the Experiment”.
About the Statistics
A total of 94 astrologers participated in the experiment. Since participation was anonymous, the group included both beginners and experienced professionals. This diversity reflects the real situation in the astrology market quite well—we ended up with a representative sample that wasn’t limited to just novices or solely to seasoned experts.
We assumed that all astrologers use similar character analysis techniques, which are well known and form part of the core curriculum in any astrology school.
We’re working with two hypotheses to be tested:
- Null Hypothesis: astrologers guess the horoscope at random—there is no connection between personality and horoscope.
- Alternative Hypothesis: astrologers genuinely possess knowledge and do not choose the horoscope randomly.
Type I and Type II Errors
Under the null hypothesis, astrologers guess randomly, so the number of correct answers should be about one-fifth of the total, that is, around 18–19 astrologers out of 94. In practice, this number may vary slightly: 18, 19, or 20.
This raises an important question:
- If more than 30 astrologers give the correct answer, can this still be considered a coincidence, or does it indicate a pattern?
- And what if the number of correct answers exceeds 60—what then?
We need to determine the critical region—the minimum number of correct answers beyond which we can confidently say that the result is no longer random and actually reflects the astrologers’ ability to match a personality with a horoscope. Otherwise, we risk making a troublesome Type I Error (False Positive)—mistakenly rejecting the null hypothesis, i.e., seeing a pattern where none exists.
The answer to that question is just below.
Alternative Hypothesis: If astrologers are not choosing the horoscope at random, we can expect their accuracy to be higher than pure guessing—that is, higher than 1/5 (20%). It could be 50% (every second astrologer gets it right), or even 75% (three out of four). It all depends on how professional we believe the participants to be.
Since the survey was anonymous and there was no committee to verify the participants’ qualifications, I chose a fairly conservative estimate: I assume astrologers are demonstrating skill if they identify the correct horoscope at least 15 percentage points more often than random guessing. In other words, if the accuracy is at least 35%, that’s already a good result.
However, we must also consider another scenario: astrologers may indeed have the skill to assess personalities, but on the day of the experiment, they could have collectively had an off day—leading the results to appear indistinguishable from random guessing. A simple example: suppose only one highly qualified astrologer takes part in the experiment but happens to make a mistake on that particular day. We might then mistakenly conclude: “The method doesn’t work.” Such an unfortunate mistake is known as a Type II Error (False Negative)—accepting the null hypothesis when, in fact, a pattern does exist.
This type of error can be minimized by increasing the number of participants. The more astrologers take part, the lower the chance that all of them will fail purely by chance. But how many participants are needed to avoid a Type II Error?
What Counts as a Non-Random Guess?
The key question for the null hypothesis is: what is the minimum number of astrologers who must correctly identify the horoscope for us to stop considering the result a matter of chance?
Below is a chart showing the probability that at least N out of 94 astrologers will guess the correct horoscope. The horizontal axis represents the number N, and the vertical axis shows the probability. Let me remind you that this graph reflects the situation under the null hypothesis—that is, assuming each astrologer is guessing purely at random.
Note: You can calculate these probabilities yourself using Excel’s built-in function:
=BINOM.DIST.RANGE(94, 20%, N, 94)
where N is the number of astrologers (from 1 to 94), and 20% is the probability of guessing the correct horoscope by chance.
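If you prefer code to Excel, the same tail probability can be computed exactly in Python using only the standard library (a minimal sketch; the function name prob_at_least is my own):

```python
from math import comb

def prob_at_least(n: int, p: float, k: int) -> float:
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Under the null hypothesis: 94 astrologers, each guessing with a 20% chance.
print(prob_at_least(94, 0.20, 1))   # at least one lucky guess: almost certain
print(prob_at_least(94, 0.20, 94))  # all 94 correct: essentially impossible
```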
From the chart, we can observe the following:
- It is almost certain that at least one astrologer will guess the correct horoscope—this is pure chance.
- It is virtually impossible for all 94 astrologers to guess correctly—the probability is close to zero, and such a result would definitely not be random.
- The boundary between randomness and pattern lies around 19 correct answers—where the odds are about even.
In statistics, a result is typically considered non-random if the probability of it happening by chance is less than 5%. This is known as the statistical significance level. From the chart, we can see that this corresponds to about 26 correct answers. In other words, if at least 26 out of 94 astrologers identify the correct horoscope, we can confidently say that the result reflects a real pattern rather than chance.
Note: The critical region (i.e., the number 26) can also be calculated using this Excel function:
=BINOM.INV(94, 20%, 1 - 5%) + 1
where 5% is the significance level, 94 is the number of astrologers, and 20% is the guessing probability.
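The same cutoff can be reproduced in Python (a sketch with helper names of my own choosing): find the smallest number of correct answers whose cumulative probability reaches 95%, then add one, mirroring how BINOM.INV works.

```python
from math import comb

def binom_cdf(n: int, p: float, k: int) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def critical_region(n: int, p: float, alpha: float) -> int:
    """Smallest number of correct answers significant at level alpha.

    Mirrors Excel's BINOM.INV(n, p, 1 - alpha) + 1: BINOM.INV returns the
    smallest k whose cumulative probability reaches 1 - alpha.
    """
    k = 0
    while binom_cdf(n, p, k) < 1 - alpha:
        k += 1
    return k + 1

print(critical_region(94, 0.20, 0.05))  # -> 26
```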
So, 26 or more correct answers out of 94 strongly suggest that we are observing real skill among astrologers—not just a coincidence.
How Many Participants Are Needed for Reliable Results?
Under the alternative hypothesis, we assume that astrologers choose the correct horoscope more often than random guessing would suggest. If, on average, 35% of the 94 astrologers (about 33 people) select the correct chart, this already points to a consistent pattern.
However, it’s important to rule out the possibility that experienced astrologers had a collective "off day," and the experimenter mistakenly interprets this as a lack of skill.
Below is a chart showing the probability of mistakenly interpreting a collective random failure as a lack of skill, depending on the number of participants. On the graph, the blue jagged line represents the likelihood of making this type of error, while the red line shows the maximum number of “errors” among N astrologers that is still considered acceptable—in other words, the maximum allowable collective mistake.
Note: You can build this chart yourself using Excel’s built-in function:
=BINOM.DIST.RANGE(N, 35%, 0, CR - 1)
where N is the number of astrologers in the experiment, 35% is the assumed accuracy under the alternative hypothesis (i.e., if astrologers have real skill), and CR is the critical region: the minimum number of correct answers at which the result is no longer considered random. It can be calculated as:
=BINOM.INV(N, 20%, 1 - 5%) + 1
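The β-curve described above can also be sketched in Python (helper names are my own): for each sample size N, β is the probability that astrologers with genuine 35% accuracy still collectively score below the critical region.

```python
from math import comb

def binom_cdf(n: int, p: float, k: int) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def critical_region(n: int, p0: float = 0.20, alpha: float = 0.05) -> int:
    """Smallest significant number of correct answers: BINOM.INV(n, p0, 1 - alpha) + 1."""
    k = 0
    while binom_cdf(n, p0, k) < 1 - alpha:
        k += 1
    return k + 1

def type_ii_error(n: int, p1: float = 0.35) -> float:
    """beta: probability that n astrologers with real skill (35% accuracy)
    still land below the critical region, i.e., a collective "off day"."""
    return binom_cdf(n, p1, critical_region(n) - 1)

for n in (1, 2, 27, 59, 94):
    print(n, round(type_ii_error(n), 3))
```

The jagged shape of the curve comes from the critical region jumping in whole-number steps as N grows.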
From the chart and calculations, we can observe the following under the alternative hypothesis:
- If only 1 skilled astrologer takes part, any result they produce will be seen as random—the probability of “missing” their systematic knowledge is 100%.
- If 2 astrologers participate and both are wrong, the chance of missing their skill drops, but still remains high—about 87%.
- With 27 participants, if 20 of them get it wrong, there's a 50% chance we’ll mistake this for a lack of skill.
- With 40 participants and 29 incorrect answers, the probability of overlooking real skill falls to 32%.
- The more participants involved, the lower the risk of making a Type II error—falsely concluding there's no knowledge due to collective failure.
In statistics, a 20% risk of a Type II error is usually considered an acceptable reliability threshold. In our case, this threshold is reached with approximately 59 participants.
In fact, 94 astrologers took part in our experiment—well above that threshold. This means that even if 70 of them got the answer wrong, the chance of overlooking real, systematic skill would be only 5%—a very low risk.
Beginning of the Experiment
So, to summarize:
- 94 astrologers took part in the experiment—enough to ensure that real knowledge wouldn’t be “overlooked,” even if up to 70 of them happened to make mistakes on the day of the test.
- We applied fairly lenient criteria for the astrologers, recognizing that not all of them are top-tier professionals. All that was required was for them to be at least 15% more accurate than random guessing.
- We established that if 26 or more astrologers give the correct answer, it indicates a real ability to match a personality description with the correct horoscope. If fewer than 26 answer correctly, the results are indistinguishable from random guessing, suggesting that astrology, in this context, does not describe personality.
Experiment Results
Here is how the astrologers’ votes were distributed across the five horoscopes:
Astrologers chose the 4th horoscope the least, while the 5th horoscope was the clear favorite among all the options. What does this mean?
It indicates that the selection of the 5th horoscope was not random—astrologers were clearly guided by some method. But now comes the most important part: the correct answer.
The correct horoscope was Horoscope #4—this was the one that belonged to a real person. All the other horoscopes were fabricated and randomly generated by me. As you can see, only 5 astrologers (out of the required 26) chose this correct option. That’s far too few to claim there’s any meaningful connection between a horoscope and a personality profile. We can confidently say that no such connection exists.
Why Was the 5th Horoscope So Popular?
The personality description based on the MBTI test was filled with phrases like: “Reliable, structured, logical, practical.” Here’s an excerpt:
You approach life with a clear set of principles, valuing tradition, structure, and well-defined expectations. Your strength lies in creating order from chaos, methodically organizing information and resources to achieve tangible outcomes. Your logical, fact-oriented mindset serves you well in analytical tasks, though it may sometimes overshadow the emotional aspects of situations.
When I read this description, it immediately reminded me of traits often associated with Capricorn. As an experiment, I decided to test whether astrologers would be inclined to choose a horoscope based primarily on the Sun’s placement in Capricorn.
So, I generated the 5th random horoscope with the Sun in Capricorn, while in the other charts the Sun was in other signs. And indeed, 47 astrologers chose the 5th horoscope—nearly twice the critical region of 26 votes. This makes the result statistically significant and rules out chance.
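Just how non-random 47 votes out of 94 is can be checked directly (a Python sketch; the function name is my own): under pure 20% guessing, the probability of 47 or more hits on a single chart is vanishingly small.

```python
from math import comb

def prob_at_least(n: int, p: float, k: int) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 47 of the 94 votes landed on the Capricorn chart; under random guessing
# (p = 0.2) such a pile-up is practically impossible.
p_value = prob_at_least(94, 0.20, 47)
print(f"{p_value:.2e}")
```

Anything below the 5% significance level rules out chance; here the probability is smaller by many orders of magnitude.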
What does this mean? It tells us two things:
- Although astrologers often claim they analyze the full chart—all planets, aspects, Jones patterns, house placements, and so on—in practice, they rely primarily on the Sun sign. In effect, they follow the approach of mainstream pop astrology found in tabloids.
- There is no actual connection between the position of the Sun in a zodiac sign and a person’s character. Everything written about zodiac signs, and everything astrologers rely on when interpreting them, is pure fiction.
Let me emphasize once again: our sample was representative of the typical practicing astrologer on today’s astrology market—and this is what the majority looks like.
As sad as it is to admit, modern astrology has failed yet another fact-check. Let me remind you, this is far from the first test that has demonstrated the inconsistency of contemporary astrological techniques.
Objections of Astrologers to the Experiment
After publishing the video with the results of this experiment, I received numerous responses from practicing astrologers. They fervently defended their methods and techniques, and their arguments boiled down to four main points.
Point 1. The experiment is inherently “dishonest” and therefore invalid.
My response: Statistics does not recognize concepts like “dishonest,” “inspiring,” or “temperamental.” The binomial distribution is simply a mathematical formula, implemented in Excel among other tools. Such subjective characteristics do not affect its mathematical correctness and therefore do not invalidate the experimental results.
Point 2. You involved random people, not real astrologers.
My response: If truly random people unfamiliar with astrology had participated, we would not have seen a statistically significant preference for the 5th horoscope. Forty-seven votes out of 94 indicate that respondents were using some general method, which, as demonstrated, proved to be flawed.
Point 3. Psychology is a pseudoscience, and the MBTI test, on which the personality description was based, is fiction.
My response: The MBTI is a standardized psychometric instrument, developed and repeatedly validated through numerous empirical studies. Its reliability (test-retest stability) and validity have been confirmed on large samples. Repeated testing shows high reproducibility of results in individuals with a stable personality, which supports the scientific basis of MBTI as a tool for describing psychological types.
Point 4. Astrology cannot be subjected to fact-checking; it is a spiritual, descriptive, and intuitive discipline.
My response: First, astrology historically developed as a prognostic discipline. Its mathematical apparatus—primary directions, house systems, spherical trigonometry formulas, and so on—was created as part of a predictive framework. It is hard to believe that the ancients devised logarithms and trigonometric functions merely to calculate spiritual goals.
Moreover, we have no evidence that astrology was ever used for anything other than forecasting (including electional astrology, the choice of auspicious times) or medicine. What we now call New Age astrology is a relatively recent development—sun sign astrology only began to emerge in the late 19th century, whereas the predictive tradition stretches back thousands of years.
Secondly, if we abandon fact-checking altogether, we lose the only reliable method we have for distinguishing between imagination and reality. Comparing claims to observable reality is the only way humans can verify the truth of anything. This is especially relevant to astrology, which often claims practical, not merely theoretical or philosophical, utility. Therefore, subjecting its assertions to scrutiny is not only appropriate—it is necessary.
In conclusion, I note that none of the astrologers could explain why the 4th horoscope was the most accurate yet was almost completely ignored. Instead, there were attempts to attack the results head-on. This only further confirms the lack of serious arguments in defense of astrological techniques.
What to do next?
If you are currently at a crossroads—whether to study astrology, and if so, which kind—here are some practical guidelines:
1. Ask yourself an honest question: Are you ready to spend 5 years of your life and a significant amount of money on something that, as this experiment showed, does not work? Modern astrology, based on personality descriptions and Zodiac signs, does not withstand even the simplest validity tests. Why spend years learning techniques that do not yield verifiable results?
2. Instead, it’s better to study methods that allow for accurate, testable predictions. For example, in our school, we teach Renaissance astrology—a system developed before the advent of the “psychological” approach. These techniques allow you to forecast events with precision down to the day and even the hour. And most importantly, you can always compare the prediction with actual events and verify whether the method works.