How our AI color analysis works
The three signals a colorist reads, now read by vision AI — with the methodology, sources, and limits out in the open.
The three signals
Personal color analysis is not mystical: it comes down to three measurable qualities. Each is covered in detail in the depth, undertone and contrast guide:
- Depth — the combined value of hair, skin, and eyes. Are you overall light, medium, or deep?
- Undertone — the warmth of your skin, confirmed by eye and hair warmth.
- Contrast — the value gap between your hair, skin, and eyes.
Those three dimensions place you on the 12-season grid. Read them correctly and there's exactly one answer.
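To make the grid concrete, here is a toy sketch of how three discrete readings could map to a season slug. The season names are the standard 12-season labels, but the mapping itself is a crude simplification written for illustration only, not the reasoning the model performs:

```python
# Toy illustration of how the three readings combine into a season slug.
# Deliberately simplified for explanation: real readings are continuous,
# not three-way labels, and this is NOT the vision model's actual logic.

def toy_season(depth: str, undertone: str, contrast: str) -> str:
    """depth: light|medium|deep; undertone: warm|cool|neutral; contrast: low|medium|high."""
    if undertone == "warm":  # warm undertones -> Spring/Autumn family
        if depth == "light":
            return "light-spring"
        if depth == "deep":
            return "dark-autumn"
        if contrast == "high":
            return "bright-spring"
        return "soft-autumn" if contrast == "low" else "true-spring"
    if undertone == "cool":  # cool undertones -> Summer/Winter family
        if depth == "light":
            return "light-summer"
        if depth == "deep":
            return "dark-winter"
        if contrast == "high":
            return "bright-winter"
        return "soft-summer" if contrast == "low" else "true-summer"
    # Neutral undertones are where crude rules break down: contrast has to
    # carry the decision, which is why the real analysis cross-checks eye
    # and hair warmth instead of stopping here.
    return "bright-winter" if contrast == "high" else "soft-summer"
```

In practice the three axes are weighed together as continuous qualities, which is why borderline combinations lower the confidence score rather than forcing a hard answer.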
Our methodology (what the model actually does)
When you upload a selfie, the image is stored briefly on a private CDN and passed to Anthropic's Claude Sonnet 4.6 vision model with a structured prompt derived from the Sci\ART methodology. The prompt asks the model to judge depth, undertone, and contrast independently — the same three judgments a trained colorist makes before picking a palette — then to match the combination to one of 12 seasons and return a confidence score.
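For illustration, the request described above can be sketched using the Anthropic Messages API's content-block format. The model ID, prompt wording, and function name below are placeholders, not our production values:

```python
import base64

# Illustrative sketch of the vision request. The field layout follows the
# Anthropic Messages API (image content blocks with base64 sources); the
# model ID and prompt text are placeholders, not our production values.

ANALYSIS_PROMPT = (
    "Judge depth, undertone, and contrast independently, "
    "match the combination to one of the 12 seasons, and return JSON "
    "with each reading, the season slug, and a confidence score."
)

def build_request(image_bytes: bytes, media_type: str = "image/jpeg") -> dict:
    """Assemble the payload sent to the vision model for one selfie."""
    return {
        "model": "claude-sonnet-4-6",  # placeholder model ID
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": media_type,
                        "data": base64.b64encode(image_bytes).decode("ascii"),
                    },
                },
                {"type": "text", "text": ANALYSIS_PROMPT},
            ],
        }],
    }
```

The image travels inline as base64 alongside the structured prompt, so the model sees the photo and the instructions in a single message.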
The prompt is deliberate about what to reason over:
- Compute depth from the combined value of hair, skin, and eyes together — not any single feature in isolation.
- Identify undertone from the skin, then cross-check it against eye warmth and hair warmth. Disagreement signals a neutral/mixed undertone.
- Measure contrast as the value gap between the three features, not how "high impact" the face looks.
- Pick the one season that satisfies all three readings. If two seasons are close, return the lower confidence.
The model returns JSON with its reading of each axis, the final season slug, and 2–3 sentences of reader-friendly reasoning. That's what populates your report.
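A minimal sketch of how that JSON could be validated before it populates a report. The field names and season slugs here are assumptions about the response shape, not the exact production schema:

```python
import json

# Sketch of validating the model's JSON reply before it reaches a report.
# Field names (depth, undertone, contrast, season, confidence, reasoning)
# are illustrative assumptions about the response shape.

SEASONS = {
    "light-spring", "true-spring", "bright-spring",
    "light-summer", "true-summer", "soft-summer",
    "soft-autumn", "true-autumn", "dark-autumn",
    "dark-winter", "true-winter", "bright-winter",
}

def parse_analysis(raw: str) -> dict:
    """Parse the model's JSON and reject malformed or out-of-range replies."""
    result = json.loads(raw)
    for key in ("depth", "undertone", "contrast", "season", "confidence", "reasoning"):
        if key not in result:
            raise ValueError(f"missing field: {key}")
    if result["season"] not in SEASONS:
        raise ValueError(f"unknown season slug: {result['season']}")
    if not 0.0 <= float(result["confidence"]) <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return result
```

Rejecting unknown slugs and out-of-range confidence values up front means a malformed reply fails loudly instead of producing a garbled report.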
What the AI can and can't do
It's fast, consistent, and free. From a good photo it reads skin, hair, and eye color about as well as most enthusiasts, and it reliably catches obvious mismatches like "I thought I was a Winter, but my skin is warm-neutral."
It's not as good as a professional in-person draping session, where a colorist watches your face change under real fabric. That's the gold standard: physical colorimetry in natural light. The AI sees a single 2D photo and has to infer; a live draping captures three-dimensional subsurface scattering and your real-time facial reaction to different colors. For roughly 95% of people the AI gets the season right on the first try; the remaining 5% benefit from a physical session.
For the best result
- Take the photo in natural daylight, not under warm indoor bulbs or blue LED ring lights.
- Skip makeup, or keep it minimal and neutral.
- Skip filters, skin-smoothing, and retouching apps — the AI needs your true tones, not your best angle.
- Face the camera, both eyes visible, shoulders-up, no sunglasses.
- If your hair is dyed, include it — but know the AI is primarily reading your face.
Sources & credit
The 12-season framework is not our invention — it's the standard modern system used by professional colorists. We build on:
- Suzanne Caygill, Color: The Essence of You (1980). The original codification of personal color seasons in American style.
- Carole Jackson, Color Me Beautiful (1980). The book that took the four-season system mainstream.
- Kathryn Kalisz (Sci\ART) and subsequent colorists (Christine Scaman, Alyson Simpson) for the modern 12-season expansion that adds depth and contrast axes.
- The broader literature on personal color analysis and skin-undertone colorimetry. For a general-interest overview, Wikipedia's personal color analysis article is a reasonable starting point.
Privacy & data
Your photo is stored privately for up to 24 hours to generate your report, then deleted. We don't sell your data and we don't train models on your image. Full details on the privacy page.
Ready to find your season?
Upload a selfie →