HypeDetector

How HypeDetector Scores a Video

Every score is the result of a structured analysis across six dimensions. Here is exactly what we check, why we check it, and where our analysis has limits.

What we analyze

Every analysis starts with two potential sources of data. The first is video metadata -- title, channel name, and upload date -- which is always available for any public YouTube video. The second is the video transcript, which we retrieve from public caption data when it exists.

Titles alone carry a significant amount of signal. The language a creator chooses for a title reflects what they are trying to make a viewer feel before they even press play. Urgency words, income figures, guaranteed outcomes, and "secret knowledge" framing all appear with measurable regularity in the titles of content that later proves to be misleading.

When a transcript is available, the analysis goes considerably deeper. We can examine the internal consistency of claims, the ratio of specific evidence to vague assertion, the frequency of pressure language throughout the video, and whether the income math holds up when worked through in detail.

When a transcript is not available, the score carries a lower confidence rating. The analysis is still useful -- title-only signals are real signals -- but we flag it clearly so you know the score is based on a partial picture.

The six scoring dimensions

The final score is a weighted combination of six distinct dimensions. Not every dimension applies to every video category, and weights shift slightly based on which sensitivity level you select.
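As a rough illustration of how a weighted combination like this can work, the sketch below rolls per-dimension scores (each 0 to 100) into one number using the approximate weights listed under the six dimensions that follow. The dimension keys and the renormalization over whichever dimensions apply are schematic assumptions, not the production implementation.

```python
# Simplified illustration of a weighted combination -- not the production code.
# Each dimension score is 0-100; weights are the approximate figures listed
# under the six dimensions below, renormalized over whichever dimensions apply.
APPROX_WEIGHTS = {
    "hype_language": 0.30,
    "income_math": 0.25,        # money/crypto categories only
    "creator_credibility": 0.20,
    "evidence_quality": 0.15,
    "thumbnail": 0.10,          # only when thumbnail data is available
    "channel_history": 0.10,    # only when channel data is available
}

def combine(dimension_scores: dict) -> float:
    """Weighted average over the dimensions that were actually scored."""
    applicable = {k: v for k, v in dimension_scores.items() if k in APPROX_WEIGHTS}
    total_weight = sum(APPROX_WEIGHTS[k] for k in applicable)
    if total_weight == 0:
        raise ValueError("no scorable dimensions")
    return sum(APPROX_WEIGHTS[k] * v for k, v in applicable.items()) / total_weight

# A non-money video with no thumbnail or channel data scores on three dimensions.
print(round(combine({"hype_language": 20, "creator_credibility": 55,
                     "evidence_quality": 40}), 1))  # 35.4
```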

1. Hype Language Analysis ~30% of score

This dimension scans for the density and pattern of urgency language, guaranteed-result framing, impossible timelines, and "hidden knowledge" structures throughout the title and transcript. The concern is not the presence of enthusiasm -- it is the systematic use of psychological pressure to bypass critical thinking.

Example trigger: "Make $10,000 in 15 minutes guaranteed -- they do not want you to know this"

2. Income Math Validation ~25% of score (money category)

When a video makes specific dollar claims, we work the arithmetic. A stated monthly income is divided by working hours (roughly 160 hours in a standard full-time month) to produce an implied hourly rate, which is then compared against real-world benchmarks. A claim of $10,000 per month implies about $62 per hour -- well above median wages but technically possible. A claim of $100,000 per month implies about $625 per hour, which is above most surgeons' effective rates and demands scrutiny. This dimension only contributes to the score for videos in the money, crypto, and related financial categories.

Example: "$10,000/month = $333/day = $41/hour -- 4x median US hourly wage"

3. Creator Credibility Signals ~20% of score

Channel name and available creator metadata are analyzed for patterns associated with predatory versus legitimate creators. Wealth-signaling words in channel names, anonymous or pseudonymous operation, absence of verifiable credentials, and the presence of generic aspirational branding with no domain-specific expertise all contribute to this dimension. Legitimate educators typically have real names attached to verifiable professional histories.

Example trigger: Channel named "Cash Flow Secrets With Mike" with no verifiable business or credential

4. Evidence Quality ~15% of score

This dimension assesses whether claims are supported by named sources, peer-reviewed citations, or verifiable data versus personal anecdote, testimonial screenshots, or assertion alone. A video that says "studies show" without naming any study scores lower than one that cites a specific paper or institution. A video that shows a PayPal screenshot as "proof" scores lower than one that references audited financials or consistent long-term results with documented methodology.

Example trigger: "Studies show this works" with no named source, institution, or publication

5. Thumbnail Analysis ~10% of score, when available

When thumbnail image data is available, we check for visual patterns that appear disproportionately in misleading content: manipulated income screenshots displayed prominently, staged luxury props, shock-face expressions designed to maximize click anxiety, and cash or luxury items as primary visual elements. These patterns do not prove deception on their own, but they correlate strongly with content that relies on manufactured aspiration rather than genuine information.

Example trigger: Thumbnail showing a large income figure on a screen next to a surprised face and a sports car

6. Channel History ~10% of score, when channel data available

Where channel history is accessible, we look for escalating income claims over time, frequent niche-switching that suggests chasing trend cycles rather than genuine expertise, unusually high ratios of course or product promotion to informational content, and upload patterns that spike during known trend cycles. A channel that moved from fitness to crypto to AI to dropshipping in three years is a different proposition from one that has covered a single domain consistently for five years.

Example trigger: Channel that covered fitness in 2020, crypto in 2021, AI side hustles in 2023, all with similar income claims
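A small sketch of how the niche-switching signal can be quantified from an upload history, assuming topic labels are already available (topic inference itself is a separate step):

```python
from datetime import date

# Illustrative upload history as (date, inferred topic) pairs. The dates and
# topic labels below are assumptions for the example, not real channel data.
history = [
    (date(2020, 3, 1), "fitness"),
    (date(2021, 2, 1), "crypto"),
    (date(2023, 5, 1), "ai side hustles"),
    (date(2024, 1, 1), "dropshipping"),
]

def distinct_niches(uploads, window_years=3):
    """Count distinct topics in the most recent window of uploads."""
    if not uploads:
        return 0
    latest = max(d for d, _ in uploads)
    cutoff = latest.replace(year=latest.year - window_years)
    return len({topic for d, topic in uploads if d >= cutoff})

# Three or more niches inside a three-year window is the kind of pattern
# this dimension treats as a trend-chasing signal.
print(distinct_niches(history))  # 3
```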

What the score means

Scores run from 0 to 100, where 0 represents pure hype with no credible signals and 100 represents a video that passes every check cleanly. The five tiers and their implications are as follows.

0 -- 20: Extreme Hype
Multiple serious red flags present. Income math is implausible, evidence is absent or fabricated, and pressure tactics are aggressive and systematic.

21 -- 40: High Hype
Significant misleading signals. Claims are likely exaggerated, evidence is weak, and the structure of the content appears designed to sell rather than inform.

41 -- 60: Mixed
A combination of red flags and credible signals. The content may have genuine value but includes patterns worth scrutinizing before acting on any recommendation.

61 -- 80: Mostly Credible
Minor concerns only. The content appears well-grounded with no major red flags, though standard due diligence is still advisable before any financial or health decision.

81 -- 100: Credible
Passes all checks cleanly. Language is measured, claims are grounded, evidence quality is adequate, and no manipulation patterns were detected.
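The tier boundaries map directly onto score ranges, as in this small sketch (the function is illustrative; it is not part of any public API):

```python
def tier(score: int) -> str:
    """Map a 0-100 score to its tier label using the thresholds above."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 20:
        return "Extreme Hype"
    if score <= 40:
        return "High Hype"
    if score <= 60:
        return "Mixed"
    if score <= 80:
        return "Mostly Credible"
    return "Credible"

print(tier(35))  # High Hype
```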

Known limitations

We want to be direct about where this tool can fail you.

The score carries lower confidence when no transcript is available. A title-only analysis is meaningfully less accurate than a full transcript analysis, and we display a clear indicator when that is the case.

AI models can miss cultural context, regional idioms, and deliberate sarcasm. A video that parodies hype culture with over-the-top language could score poorly even though the intent is satirical. We recommend treating low scores on clearly comedic or educational content with appropriate skepticism.

A high score does not guarantee a video is honest. A sophisticated creator who structures claims carefully and avoids overt hype language can still be misleading. Pattern detection is not the same as fact verification.

A low score does not guarantee a video is dishonest. Some legitimate creators use energetic, urgency-adjacent language without any intent to deceive. A score below 40 is a prompt to look more carefully, not a verdict.

We do not verify claims against external databases, court records, regulatory filings, or published research. Our analysis is based on patterns in the video content itself, not on independent investigation of the creator or the product being promoted.

Scores reflect automated pattern detection. They are not legal determinations, regulatory findings, or editorial opinions. Nothing here should be used to publicly characterize a creator as fraudulent.

What we are not

We are not a court, regulator, or fact-checking agency.

We do not guarantee a creator is honest or dishonest.

We do not offer legal, financial, medical, or investment advice.

HypeDetector is a structured second opinion -- a tool to slow down impulsive decisions by surfacing the patterns that misleading content tends to share. The judgment is always yours.

About the methodology

HypeDetector was built by a scientist with 10 years of protein biochemistry research experience and affiliations with Stanford University. The scoring methodology draws on documented patterns in consumer fraud research, FTC guidelines on deceptive marketing, and published literature on online misinformation tactics.

The weighting of each dimension and the score thresholds for each tier are calibrated against a manually reviewed dataset of videos spanning the categories the tool covers. That calibration is updated periodically as new patterns emerge and as the underlying AI model improves.

Questions about the methodology or suggestions for improving it can be sent through the contact page. We read everything.