Module 02 — Product Research — Finding Winners Before Everyone Else
The Majorka Winning Score Explained
8 min · interactive · Intermediate
Total order count is the metric beginners use. The Majorka Winning Score is the metric operators use. The difference is whether you ride a curve up or arrive in its wake. Today: what goes into the score, what it means, and how to filter on it.
Why we built the Winning Score
Every spy tool ranks products by a single metric — usually total orders or total ad spend. Both are lagging indicators. A product with 80,000 lifetime orders may have done all of them six months ago. A product with high ad spend may have a high CPM and bad ROAS.
The Winning Score is a composite. It weights five inputs to produce a single number 0-100 that reflects current opportunity, not legacy size.
The five inputs
The score is computed daily across the entire Majorka catalogue. The weights:
| Input | Weight | What it captures |
|---|---|---|
| Velocity | 40% | Order acceleration (7d / 14d / 30d) |
| Margin Potential | 20% | Retail-to-COGS gap, accounting for shipping |
| Supplier Reliability | 15% | Store age, feedback, dispute rate, multiplicity |
| Market Demand | 15% | Search trend, ad-library density, social signal |
| Competition Density | 10% | Number of advertisers / saturation index |
Note: competition density enters the score negatively — high competition lowers it. The signal is "where is opportunity available," not "where is the most volume."
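The weighting scheme in the table can be sketched in a few lines of TypeScript. This is an illustration of the composite, not Majorka's actual implementation — the `ScoreInputs` shape and `winningScore` name are assumed, and each input is presumed pre-normalised to 0-100:

```typescript
// Sketch of the weighted composite, assuming all five inputs are
// already normalised to a 0-100 scale. Competition density is
// inverted so that high saturation lowers the final score.
interface ScoreInputs {
  velocity: number;            // order acceleration (7d / 14d / 30d)
  marginPotential: number;     // retail-to-COGS gap
  supplierReliability: number; // store age, feedback, dispute rate
  marketDemand: number;        // search trend, ad-library density
  competitionDensity: number;  // 0 = empty niche, 100 = saturated
}

function winningScore(s: ScoreInputs): number {
  const score =
    0.40 * s.velocity +
    0.20 * s.marginPotential +
    0.15 * s.supplierReliability +
    0.15 * s.marketDemand +
    0.10 * (100 - s.competitionDensity); // inverted: saturation penalises
  return Math.round(score);
}
```

Note how a perfectly saturated niche (competition density 100) contributes zero from the last term, while an empty one contributes the full 10 points.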
What each score range means
- 90-100: Active winner in growth phase with healthy margins, multiple suppliers, growing demand and not-yet-saturated competition. Rare. Maybe 50-150 products in the catalogue at any time. Move fast.
- 75-89: Strong candidate. Probably one input slightly weak (saturation moderate, or margin tight). Good shortlist material with verification.
- 60-74: Mixed signal. Possibly strong on velocity but weak on supplier, or strong on margin but late curve. Investigate but don't lead with these.
- Under 60: Pass. Multiple weak inputs. Plenty of better candidates.
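The four ranges reduce to a simple lookup, handy if you triage exported product lists in code. The function name and labels are illustrative:

```typescript
// The score ranges above as a triage lookup (illustrative labels).
function scoreTier(score: number): string {
  if (score >= 90) return "move fast";           // active winner
  if (score >= 75) return "shortlist and verify"; // strong candidate
  if (score >= 60) return "investigate";          // mixed signal
  return "pass";                                  // multiple weak inputs
}
```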
Why velocity is weighted highest
Velocity is the leading indicator for everything else. A product accelerating in orders typically:
- Has fresh supplier interest (more sellers list it)
- Has rising demand (search trends inflect)
- Hasn't yet hit max saturation (advertisers are still discovering it)
Catch velocity at 70+ and the other metrics typically follow. By the time margin and reliability look perfect, velocity has often already peaked.
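One way to turn the 7d / 14d / 30d order counts into an acceleration signal is to compare the recent daily order rate against the prior week and the monthly baseline. This is an assumed formula for illustration, not Majorka's exact velocity input:

```typescript
// Acceleration sketch from rolling order counts (assumed formula).
// Returns > 1 when the last 7 days outpace both the prior week and
// the 30-day baseline; ~1 when flat; < 1 when decelerating.
function velocityRatio(orders7d: number, orders14d: number, orders30d: number): number {
  const recent = orders7d / 7;                         // last week's daily rate
  const prior = Math.max((orders14d - orders7d) / 7, 0.001); // week before that
  const base = Math.max(orders30d / 30, 0.001);        // monthly baseline
  return Math.min(recent / prior, recent / base);      // conservative: take the weaker signal
}
```

Taking the minimum of the two ratios is a deliberately conservative choice: a product must beat both its prior week and its monthly baseline to read as accelerating.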
How to use the score in workflow
The score is a filter, not a decision. It eliminates 90% of the catalogue so you can focus your manual evaluation on the top 10%. Workflow:
- Open Products in Majorka.
- Set filter: Winning Score ≥ 80, AU Relevance ≥ 70, Category = your target.
- Review the top 30 results. Apply the seven-signal scan from Lesson 1.5.
- Cross-check 5 candidates against TikTok and Meta Ad Library.
- Narrow to top 3 for sample-ordering or launch.
This pipeline takes ~60-90 minutes. It replaces a 10-hour manual scan of AliExpress that produces worse results.
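The filter step of that pipeline, sketched over a plain array. In the real product these are dashboard filters, not code; the field names (`winningScore`, `auRelevance`, `velocity`, `category`) are assumed for illustration:

```typescript
// The "Score >= 80, AU Relevance >= 70, category match" filter from the
// workflow above, sorted by velocity descending and capped at 30 rows.
interface ProductRow {
  name: string;
  winningScore: number;
  auRelevance: number;
  velocity: number;
  category: string;
}

function shortlist(rows: ProductRow[], category: string, limit = 30): ProductRow[] {
  return rows
    .filter(r => r.winningScore >= 80 && r.auRelevance >= 70 && r.category === category)
    .sort((a, b) => b.velocity - a.velocity)
    .slice(0, limit);
}
```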
What the score does NOT capture
Important. The score is a quantitative signal. It cannot read:
- Brand fit for your store (a high-score watch won't fit a kitchen brand)
- Creative producibility (is the product visually demonstrable?)
- Cultural fit for your market (some products work in US but not AU due to lifestyle/regulation)
- Your personal expertise (you'll execute better in categories you understand)
A 92-score product in a category you have no creative for will do worse than a 78-score product in a category you can shoot a UGC video for tomorrow morning.
Velocity + score together
The most useful filter is velocity in combination with score:
- High score + rising velocity = current opportunity
- High score + flat velocity = was a winner, now a wake
- Mid score + rising velocity = early signal, keep watching
- Low score + falling velocity = obvious skip
Majorka surfaces both numbers on every product row. Reading them together is what separates an operator from a beginner.
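The four combinations above fit in one small function. The thresholds (75 for "high score", 60 for "mid") are assumptions chosen to match the ranges earlier in this lesson:

```typescript
// The score x velocity matrix above as a lookup (thresholds assumed).
function readSignal(score: number, velocityRising: boolean): string {
  if (score >= 75) {
    return velocityRising ? "current opportunity" : "was a winner, now a wake";
  }
  if (score >= 60 && velocityRising) {
    return "early signal, keep watching";
  }
  return "skip";
}
```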
How the score updates
The score is recomputed daily as new pipeline data flows in. A product that was 76 yesterday might be 84 today if velocity inflected, or 71 if a major competitor entered the category. Trust recent scores; treat scores from 7+ days ago as stale.
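If you export scores for offline analysis, the 7-day staleness rule is worth enforcing in code. A minimal sketch, assuming you track the timestamp each score was computed:

```typescript
// Treat a score as stale once it is 7 or more days old, per the
// guidance above. `scoredAt` is an assumed field, not Majorka's API.
function isStale(scoredAt: Date, now: Date = new Date()): boolean {
  const ageDays = (now.getTime() - scoredAt.getTime()) / 86_400_000; // ms per day
  return ageDays >= 7;
}
```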
Why this matters
Most beginners drown in product research because they have no filter — they scroll AliExpress for 4 hours and pick something that "looks cool." The Winning Score collapses millions of AliExpress listings into a usable shortlist of 50-100 candidates, ranked by current opportunity. It is not magic — it is composition of signals you would otherwise compute manually, badly, slowly. Use it as a filter every time you research.
How an operator went from 6-hour research sessions to 45-minute shortlists
Before Majorka: an operator in Adelaide spent 4-7 hours per week scrolling AliExpress, opening 60+ product pages, and manually noting orders, store age, and supplier count in a spreadsheet. He found ~3 candidates per session. Hit-rate (candidates that ran profitably): about 1 in 12.
After Winning Score: same operator opens Products, filters Score ≥ 82, AU Relevance ≥ 75, sorts by Velocity descending. Top 25 results in 8 seconds. He runs the seven-signal scan on each in 60 seconds — total review time 30 minutes. Adds 5-7 candidates to shortlist. Total session: 45 minutes.
Hit-rate measured over 6 months: 1 in 6 — twice as good as manual research. Why? Because the Winning Score filters out the 80% of AliExpress products that look fine on a thumbnail but fail on velocity, supplier age, or saturation. The remaining 20% is where real opportunity lives.
Time saved: ~5 hours per week. Hit-rate improvement: 2x. Same operator, same skill, different filter.
Action items
- Open Products. Set filter: Winning Score ≥ 80, AU Relevance ≥ 70, your target category.
- Sort by Velocity descending. Open the top 10 in new tabs.
- Run the seven-signal scan on each. Add survivors to your shortlist.
- Set a personal threshold: never launch a product with Winning Score under 75. The hit-rate math doesn't support it.
Next lesson: the live walkthrough — building a 20-product shortlist in under an hour using Majorka, AliExpress, TikTok and Meta Ad Library together. The full research workflow, end to end.
The hardest skill in this business. Data-driven frameworks for spotting products at the beginning of their curve, not the end.
Lessons in this module
- The 4 Types of Winning Products (and which you should pick) · 11 min
  Problem-solvers, wow-factor, impulse, evergreen — the trade-offs of each.
- Trend Velocity — Catching a Winner at Day 10, Not Day 60 · 13 min
  How to read a velocity curve and when to pounce.
- AliExpress Signals That Actually Matter · 9 min
  Ignore reviews. Watch orders, store age, and "recently ordered" pulse.
- TikTok Search for Product Discovery (the right way) · 10 min
  The search strings that surface rising products, not viral replays.
- Meta Ad Library — Reverse-Engineering Competitor Winners · 12 min
  How to tell a test from a scale, and steal the ad angle without the copy.
- The Majorka Winning Score Explained (this lesson) · 8 min
  What goes into the score, why it beats raw order counts, how to use it.
- Building a 20-Product Shortlist in Under an Hour · 15 min
  Live walkthrough: from dashboard to validated shortlist, fast.