How We See Beauty in Gifts
gifts.supply is an AI-powered analytics platform for Telegram Gift NFTs. We don't just track rarity: we measure visual harmony and semantic meaning, and apply color science, to understand what makes a gift truly special.
Scoring Philosophy
Most NFT platforms rank items by rarity alone. We believe visual harmony matters more for collectibles; that's why the aesthetic score carries 62% of the final Collector Score.
collector_score = 0.23 × rarity + 0.62 × aesthetic + 0.15 × serial
Rarity is weighted 65% model + 15% backdrop + 20% symbol. The serial score rewards both low numbers (rank, 30%) and beautiful patterns (beauty, 70%): palindromes, meme numbers (69, 420, 1337), and repeating digits.
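The scoring blend above can be sketched in a few lines. This is an illustrative reconstruction from the published weights, not the platform's actual code; the helper names and the toy beauty heuristics are assumptions.

```python
# Sketch of the Collector Score blend. All component scores are assumed
# to be normalized to [0, 1]; function names are illustrative.

def rarity_score(model: float, backdrop: float, symbol: float) -> float:
    """Weighted rarity: 65% model, 15% backdrop, 20% symbol."""
    return 0.65 * model + 0.15 * backdrop + 0.20 * symbol

def serial_beauty(serial: int) -> float:
    """Toy beauty check: meme numbers, palindromes, repeating digits.
    The real pattern list and scores are assumptions."""
    s = str(serial)
    if serial in (69, 420, 1337):
        return 1.0
    if len(s) > 1 and s == s[::-1]:
        return 0.9
    if len(set(s)) == 1:
        return 0.8
    return 0.0

def serial_score(rank_score: float, serial: int) -> float:
    """30% low-number rank + 70% pattern beauty."""
    return 0.30 * rank_score + 0.70 * serial_beauty(serial)

def collector_score(rarity: float, aesthetic: float, serial: float) -> float:
    return 0.23 * rarity + 0.62 * aesthetic + 0.15 * serial
```

Because all three weights sum to 1.0, a gift that maxes every component scores exactly 1.0.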
Compatibility Engine
The aesthetic score measures how well a gift's components fit together. Three compatibility pipelines run across every possible attribute pair:
aesthetic = 0.56 × symbol↔model + 0.18 × backdrop↔model + 0.26 × collection↔symbol
Symbol ↔ Model Compatibility (56%)
The primary compatibility check: does the symbol make sense with the model? Four AI dimensions are blended:
symbol_model = 0.45 × semantic + 0.28 × visual + 0.10 × strict + 0.17 × cross
- Semantic (45%): text embedding similarity via BGE-M3 (1024d). Do they mean similar things?
- Visual (28%): image embedding similarity via DINOv2 ViT-L/14 (1024d). Do they look alike?
- Strict (10%): multiplicative, semantic × visual. High only when both agree.
- Cross-modal (17%): SigLIP (1152d). Can the AI recognize the model from the symbol's image, and vice versa?
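The four dimensions above reduce to one weighted sum, with the strict term derived from the other two. A minimal sketch, assuming all inputs are already percentile-scaled to [0, 1]:

```python
# Blend of the four symbol-model dimensions using the weights quoted
# above. The strict term is multiplicative, so it only contributes
# when semantic and visual agreement are both high.

def symbol_model_score(semantic: float, visual: float, cross: float) -> float:
    strict = semantic * visual
    return (0.45 * semantic
            + 0.28 * visual
            + 0.10 * strict
            + 0.17 * cross)
```

For example, semantic = 0.9, visual = 0.8, cross = 0.5 gives strict = 0.72 and a blended score of 0.786.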
Backdrop ↔ Model Compatibility (18%)
Backdrops are radial gradients, so they have no meaningful image embeddings; we compare them semantically only (BGE-M3 text embeddings). Separately, we compute color separation in a dual color space (70% HSV + 30% OKLCH) for the monochrome/contrast achievement system.
Collection ↔ Symbol Compatibility (26%)
Does this symbol fit the collection's theme? This blend is semantic-heavy:
coll_symbol = 0.60 × semantic + 0.20 × visual + 0.15 × strict + 0.05 × cross
All raw cosine similarities are percentile-scaled (p5→0, p95→1) to normalize distributions across entities. Computed in PostgreSQL via pgvector with HNSW indexes (m=16, ef=64).
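The percentile scaling can be sketched in pure Python. The production version runs inside PostgreSQL; the linearly interpolated percentile below is an assumption and may differ from the exact estimator used.

```python
# Map raw similarities so the 5th percentile lands at 0 and the 95th
# at 1, clamping everything outside that band.

def percentile(sorted_vals, q):
    """Linearly interpolated percentile of pre-sorted values, q in [0, 100]."""
    idx = (len(sorted_vals) - 1) * q / 100.0
    lo, hi = int(idx), min(int(idx) + 1, len(sorted_vals) - 1)
    frac = idx - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def percentile_scale(values):
    s = sorted(values)
    p5, p95 = percentile(s, 5), percentile(s, 95)
    span = (p95 - p5) or 1.0          # guard against a degenerate distribution
    return [min(1.0, max(0.0, (v - p5) / span)) for v in values]
```

Clamping means the bottom 5% of raw similarities all map to 0 and the top 5% all map to 1, which flattens outliers and makes scores comparable across entities.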
Color Science
Every model and backdrop image undergoes pixel-level color analysis in multiple color spaces:
Dominant Color Extraction
OKLab k-means clustering (5 clusters, 10K pixel samples, 32 iterations) extracts the dominant color palette. Results stored as OKLCH (Lightness, Chroma, Hue) and HSV.
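The clustering step can be sketched with a plain k-means over OKLab triples. This is a minimal stdlib version that assumes pixels are already converted to OKLab; the cluster count and iteration cap mirror the parameters quoted above, while the init strategy is an assumption.

```python
import random

# Minimal k-means over (L, a, b) pixel triples: random init, then
# alternate assignment to the nearest center and center recomputation.

def kmeans(pixels, k=5, iters=32, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(pixels, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in pixels:
            # assign each pixel to its nearest center (squared distance)
            i = min(range(k),
                    key=lambda c: sum((p[d] - centers[c][d]) ** 2 for d in range(3)))
            buckets[i].append(p)
        # recompute each center as the mean of its bucket; keep empty
        # clusters where they are
        centers = [
            tuple(sum(p[d] for p in b) / len(b) for d in range(3)) if b else centers[i]
            for i, b in enumerate(buckets)
        ]
    return centers
```

Running this on 10K sampled pixels with k=5 yields the five dominant OKLab colors, which can then be converted to OKLCH and HSV for storage.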
Monochrome Detection
A 5-stage algorithm in OKLab (perceptually uniform color space) determines whether an image is monochromatic:
- Gray gate: if p95(chroma) < 0.03, classify as near-grayscale
- Hue concentration (R): circular mean of the (a, b) vectors; R≈1 means a single hue dominates, R≈0 means hues are scattered
- Perpendicular spread: p95 deviation from the dominant hue axis (threshold: 0.03)
- Outlier fraction: at most 2% of pixels may strongly deviate
- Final verdict: monochromatic if all checks pass
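The first two stages can be sketched directly. The code below is illustrative, covering only the gray gate and the hue-concentration statistic R; the thresholds come from the list above, everything else is an assumption.

```python
import math

# Gray gate and hue concentration for monochrome detection in OKLab.
# (a, b) are the two chroma axes; chroma = hypot(a, b).

def p95(values):
    """Simple 95th percentile (nearest-rank on the sorted list)."""
    s = sorted(values)
    return s[int(0.95 * (len(s) - 1))]

def is_near_grayscale(chromas, gate=0.03):
    """Stage 1: almost no pixel has visible chroma."""
    return p95(chromas) < gate

def hue_concentration(ab_pairs):
    """Stage 2: length of the mean unit (a, b) vector.
    R close to 1 => one hue dominates; close to 0 => hues scattered."""
    unit = []
    for a, b in ab_pairs:
        n = math.hypot(a, b)
        if n > 1e-9:                 # skip near-gray pixels with no hue
            unit.append((a / n, b / n))
    if not unit:
        return 0.0
    mx = sum(u for u, _ in unit) / len(unit)
    my = sum(v for _, v in unit) / len(unit)
    return math.hypot(mx, my)
```

Opposite hues cancel in the vector mean, which is why R collapses toward 0 for multi-hued images even when each individual hue is strong.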
Color Separation
For backdrop ↔ model pairs, we measure how distinguishable the model is against its backdrop. A dual-space blend: 70% HSV (hue 50%, saturation 25%, value 25%) + 30% OKLCH with a chroma-gated hue distance (low-chroma colors have unstable hue, so we gate at C = 0.08).
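The chroma gate on the OKLCH hue term can be sketched as follows. This is an illustrative interpretation: the gate value matches the text above, while the zeroing behavior and normalization are assumptions.

```python
# Chroma-gated hue distance for OKLCH comparisons. Hue is an angle in
# degrees; at low chroma the hue coordinate is numerically unstable,
# so the hue term is suppressed below the gate.

def hue_distance_deg(h1: float, h2: float) -> float:
    """Shortest angular distance between two hues, in degrees."""
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)

def gated_hue_distance(h1, c1, h2, c2, gate=0.08):
    if c1 < gate or c2 < gate:
        return 0.0                        # low chroma: hue unreliable, ignore
    return hue_distance_deg(h1, h2) / 180.0   # normalize to [0, 1]
```

Without the gate, two near-gray colors could register as maximally separated just because their meaningless hue angles happen to differ.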
Visual Weight & Rhythm
Rule-based tags from OKLCH values: weight (light/medium/heavy) from 0.5 × (1 − L) + 0.3 × (C / 0.15) + 0.2 × contrast, and rhythm (calm/dynamic/chaotic) from RMS image-contrast thresholds.
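The weight rule can be written out directly. The formula follows the text above; the tag cutoffs and the clamp on the chroma term are illustrative assumptions.

```python
# Visual-weight tag from OKLCH lightness L in [0, 1], chroma C
# (roughly [0, 0.3] in OKLCH), and RMS contrast in [0, 1].

def visual_weight(L: float, C: float, contrast: float) -> str:
    # C / 0.15 is clamped to 1 here so highly chromatic colors
    # don't dominate the score (a sketch-level assumption)
    score = 0.5 * (1 - L) + 0.3 * min(C / 0.15, 1.0) + 0.2 * contrast
    if score < 0.35:        # cutoffs are illustrative
        return "light"
    if score < 0.65:
        return "medium"
    return "heavy"
```

A pale, low-chroma, low-contrast image scores near 0 and tags "light"; a dark, saturated, high-contrast one scores near 1 and tags "heavy".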
Semantic Tagging
Every attribute (model, symbol, backdrop) receives automatic semantic tags derived from LLM-generated keywords + color analysis:
Mood and style use keyword-matching against 150+ trigger words per category, with LLM category fallback (e.g., "animal" → playful, "celestial" → serene + mysterious). Tags power achievements like Mood Curator (80%+ dominant mood) and Chaotic Energy (7+ distinct moods).
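A toy version of the keyword-matching pass with category fallback looks like this. The trigger words and fallback mapping below are small illustrative samples, not the real 150+-word lists.

```python
# Mood tagging: match LLM-generated keywords against trigger-word sets,
# falling back to the LLM's category when nothing matches directly.

MOOD_TRIGGERS = {
    "playful": {"fun", "bounce", "toy", "game"},
    "serene": {"calm", "sky", "moon", "still"},
}
CATEGORY_FALLBACK = {
    "animal": ["playful"],
    "celestial": ["serene", "mysterious"],
}

def mood_tags(keywords, llm_category=None):
    tags = {mood for mood, triggers in MOOD_TRIGGERS.items()
            if triggers & set(keywords)}          # any trigger word hit
    if not tags and llm_category:
        tags.update(CATEGORY_FALLBACK.get(llm_category, []))
    return sorted(tags)
```

The fallback only fires when no trigger word matched, so direct keyword evidence always wins over the coarser category mapping.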
3-Layer Search Pipeline
Searching "永遠の炎" (Japanese for "eternal flame") triggers a cascading three-layer pipeline.
Entity relevance = max(similarity) across all layers. Gift scoring weights matched attributes: collection +0.03, model +0.04, symbol +0.01, backdrop +0.005. Results are cached in Redis (5-minute TTL) for instant repeat queries.
AI Model Stack
Total: ~4.8B parameters across 6 models. SigLIP, BGE-M3, and MADLAD-400 run in a dedicated semantic-encoder microservice (7GB RAM, 4 CPU). All vector operations use pgvector HNSW indexes in PostgreSQL.
Achievement System
35+ achievements across 8 categories, many with repeatable instances and diminishing-return scaling (log2, sqrt):
Monochrome/Contrast Lord and God tiers are loss-based: owning a single non-qualifying gift revokes the achievement. This creates meaningful commitment to a collecting strategy.
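The diminishing-return scaling mentioned above (log2, sqrt) is straightforward to illustrate. Which curve applies to which achievement is not specified here, so the pairing below is an assumption.

```python
import math

# Diminishing-return progress curves for repeatable achievements:
# each additional qualifying instance is worth less than the last.

def log2_scaled(instances: int) -> float:
    """1 instance -> 1.0, 3 -> 2.0, 7 -> 3.0, ..."""
    return math.log2(instances + 1)

def sqrt_scaled(instances: int) -> float:
    """1 instance -> 1.0, 4 -> 2.0, 9 -> 3.0, ..."""
    return math.sqrt(instances)
```

Both curves keep early instances valuable while preventing large holders from scaling achievement credit linearly with collection size.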
Infrastructure
Tech Stack
PostgreSQL with pgvector (HNSW indexes), Redis caching, and a dedicated semantic-encoder microservice (SigLIP, BGE-M3, MADLAD-400).
A product of the Schum collective.