How We See Beauty in Gifts
gifts.supply is an AI-powered analytics platform for Telegram Gift NFTs. We go beyond rarity, combining visual harmony, semantic meaning, and color science to understand what makes a gift truly special.
Scoring Philosophy
Most NFT platforms rank items by rarity alone. We believe visual harmony matters more for collectibles — that's why aesthetic score carries 62% of the final Collector Score.
collector_score = 0.23 × rarity + 0.62 × aesthetic + 0.15 × serial
Rarity is weighted: 65% model + 15% backdrop + 20% symbol. Serial score rewards both low numbers (rank 30%) and beautiful patterns (beauty 70%) — palindromes, meme numbers (69, 420, 1337), repeating digits.
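The serial side can be sketched as a small heuristic. Only the 30/70 rank/beauty split comes from the text above; the individual pattern values (0.8 for palindromes, 1.0 for repeating digits, 0.9 for meme numbers) are illustrative assumptions, not the platform's exact scores:

```python
def serial_beauty(n: int) -> float:
    """Pattern-beauty score in [0, 1]. Pattern values are illustrative."""
    s = str(n)
    score = 0.0
    if len(s) > 1 and s == s[::-1]:      # palindrome, e.g. 1221
        score = max(score, 0.8)
    if len(s) > 1 and len(set(s)) == 1:  # repeating digits, e.g. 7777
        score = max(score, 1.0)
    if n in {69, 420, 1337}:             # meme numbers
        score = max(score, 0.9)
    return score

def serial_score(n: int, rank_score: float) -> float:
    # 30% low-number rank + 70% pattern beauty (weights from the text)
    return 0.30 * rank_score + 0.70 * serial_beauty(n)
```

The result feeds the 15% serial term of the Collector Score formula above.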
Compatibility Engine
The aesthetic score measures how well a gift's components fit together. Three compatibility pipelines score the relevant attribute pairs:
aesthetic = 0.56 × symbol↔model + 0.18 × backdrop↔model + 0.26 × collection↔symbol
Symbol ↔ Model Compatibility (56%)
The primary compatibility — does the symbol make sense with the model? Four AI dimensions are blended:
symbol_model = 0.45 × semantic + 0.28 × visual + 0.10 × strict + 0.17 × cross
- Semantic (45%) — Text embedding similarity via BGE-M3 (1024d). Do they mean similar things?
- Visual (28%) — Image embedding similarity via DINOv2 ViT-L/14 (1024d). Do they look alike?
- Strict (10%) — Multiplicative: semantic × visual. High only when both agree.
- Cross-modal (17%) — SigLIP (1152d): can the AI recognize the model from the symbol's image, and vice versa?
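Assuming each dimension arrives as a percentile-scaled similarity in [0, 1], the four-way blend is a direct weighted sum, with the strict term computed multiplicatively from the other two:

```python
def symbol_model_compat(semantic: float, visual: float, cross: float) -> float:
    """Blend the four AI dimensions (weights from the text).
    Inputs are percentile-scaled similarities in [0, 1]."""
    strict = semantic * visual  # high only when both modalities agree
    return 0.45 * semantic + 0.28 * visual + 0.10 * strict + 0.17 * cross
```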
Backdrop ↔ Model Compatibility (18%)
Backdrops are radial gradients — no meaningful image embeddings. We compare them semantically only (BGE-M3 text embeddings). Separately, we compute color separation in dual color space (70% HSV + 30% OKLCH) for the monochrome/contrast achievement system.
Collection ↔ Symbol Compatibility (26%)
Does this symbol fit the collection's theme? Semantic-heavy:
coll_symbol = 0.60 × semantic + 0.20 × visual + 0.15 × strict + 0.05 × cross
All raw cosine similarities are percentile-scaled (p5→0, p95→1) to normalize distributions across entities. Computed in PostgreSQL via pgvector with HNSW indexes (m=16, ef=64).
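The p5→0, p95→1 scaling can be sketched with NumPy; `percentile_scale` is a hypothetical helper, not the platform's code:

```python
import numpy as np

def percentile_scale(sims: np.ndarray) -> np.ndarray:
    """Map raw cosine similarities so the 5th percentile lands at 0
    and the 95th at 1, clipping the tails into [0, 1]."""
    lo, hi = np.percentile(sims, [5, 95])
    return np.clip((sims - lo) / (hi - lo), 0.0, 1.0)
```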
Color Science
Every model and backdrop image undergoes pixel-level color analysis in multiple color spaces:
Dominant Color Extraction
OKLab k-means clustering (5 clusters, 10K pixel samples, 32 iterations) extracts the dominant color palette. Results stored as OKLCH (Lightness, Chroma, Hue) and HSV.
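A minimal version of the extraction step, using Björn Ottosson's published sRGB→OKLab matrices and a naive k-means loop; the real pipeline may differ in sampling, initialization, and convergence handling:

```python
import numpy as np

def srgb_to_oklab(rgb: np.ndarray) -> np.ndarray:
    """Convert sRGB values in [0, 1], shape (N, 3), to OKLab."""
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    lms = lin @ np.array([[0.4122214708, 0.2119034982, 0.0883024619],
                          [0.5363325363, 0.6806995451, 0.2817188376],
                          [0.0514459929, 0.1073969566, 0.6299787005]])
    lms = np.cbrt(lms)
    return lms @ np.array([[ 0.2104542553,  1.9779984951,  0.0259040371],
                           [ 0.7936177850, -2.4285922050,  0.7827717662],
                           [-0.0040720468,  0.4505937099, -0.8086757660]])

def dominant_colors(pixels_rgb, k=5, samples=10_000, iters=32, seed=0):
    """Naive k-means over a pixel sample in OKLab; returns k cluster centers."""
    rng = np.random.default_rng(seed)
    pix = np.asarray(pixels_rgb, dtype=float)
    idx = rng.choice(len(pix), min(samples, len(pix)), replace=False)
    lab = srgb_to_oklab(pix[idx])
    centers = lab[rng.choice(len(lab), k, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(lab[:, None] - centers[None], axis=2)
        assign = dist.argmin(axis=1)
        for j in range(k):
            members = lab[assign == j]
            if len(members):               # skip empty clusters
                centers[j] = members.mean(axis=0)
    return centers
```

The OKLab centers can then be re-expressed as OKLCH (L, C = hypot(a, b), h = atan2(b, a)) or HSV for storage.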
Monochrome Detection
A 5-stage algorithm in OKLab (perceptually uniform color space) determines whether an image is monochromatic:
- Gray gate — if p95(chroma) < 0.03, classify as near-grayscale
- Hue concentration (R) — circular mean of (a, b) vectors; R→1 = single hue dominates, R→0 = scattered
- Perpendicular spread — p95 deviation from dominant hue axis (threshold: 0.03)
- Outlier fraction — max 2% pixels may strongly deviate
- Final verdict — monochromatic if all checks pass
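The hue-concentration check (step 2) can be sketched as the resultant length R of the per-pixel hue angles in the OKLab (a, b) plane; whether the real algorithm weights pixels by chroma is not specified, so this version does not:

```python
import numpy as np

def hue_concentration(a: np.ndarray, b: np.ndarray) -> float:
    """Resultant length R of the (a, b) hue angles.
    R -> 1: a single hue dominates; R -> 0: hues are scattered."""
    theta = np.arctan2(b, a)
    return float(np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))
```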
Color Separation
For backdrop ↔ model pairs, we measure how distinguishable the model is against its backdrop. Dual-space blend: 70% HSV (hue 50%, saturation 25%, value 25%) + 30% OKLCH with chroma-gated hue distance (low-chroma colors have unstable hue → gate at C=0.08).
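A sketch of the dual-space blend. The 70/30 split, the HSV channel weights, and the C = 0.08 chroma gate come from the text; the per-channel OKLCH weights are an assumption (mirroring the HSV split), and `hue_distance` is a hypothetical helper:

```python
def hue_distance(h1: float, h2: float) -> float:
    """Circular hue distance in degrees, normalized to [0, 1]."""
    d = abs(h1 - h2) % 360
    return min(d, 360 - d) / 180

def color_separation(hsv_a, hsv_b, oklch_a, oklch_b) -> float:
    """Separation in [0, 1]; hsv_* = (h°, s, v), oklch_* = (L, C, h°)."""
    hsv = (0.50 * hue_distance(hsv_a[0], hsv_b[0])
           + 0.25 * abs(hsv_a[1] - hsv_b[1])
           + 0.25 * abs(hsv_a[2] - hsv_b[2]))
    # Chroma gate: low-chroma colors have unstable hue, so fade hue's
    # contribution to zero below C = 0.08.
    gate = min(oklch_a[1], oklch_b[1], 0.08) / 0.08
    ok = (0.50 * gate * hue_distance(oklch_a[2], oklch_b[2])
          + 0.25 * abs(oklch_a[0] - oklch_b[0])
          + 0.25 * abs(oklch_a[1] - oklch_b[1]))
    return 0.70 * hsv + 0.30 * ok
```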
Visual Weight & Rhythm
Rule-based tags from OKLCH values: weight (light/medium/heavy) from 0.5×(1-L) + 0.3×(C/0.15) + 0.2×contrast, and rhythm (calm/dynamic/chaotic) from RMS image contrast thresholds.
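The weight tag, sketched directly from the formula above; the light/medium/heavy cutoffs (0.35 and 0.65) are hypothetical:

```python
def visual_weight(L: float, C: float, contrast: float) -> str:
    """Tag visual weight from OKLCH lightness, chroma, and RMS contrast.
    Formula from the text; tier cutoffs are illustrative."""
    w = 0.5 * (1 - L) + 0.3 * (C / 0.15) + 0.2 * contrast
    if w < 0.35:
        return "light"
    if w < 0.65:
        return "medium"
    return "heavy"
```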
Semantic Tagging
Every attribute (model, symbol, backdrop) receives automatic semantic tags derived from LLM-generated keywords + color analysis:
Mood and style use keyword-matching against 150+ trigger words per category, with LLM category fallback (e.g., "animal" → playful, "celestial" → serene + mysterious). Tags power achievements like Mood Curator (80%+ dominant mood) and Chaotic Energy (7+ distinct moods).
3-Layer Search Pipeline
Searching "永遠の炎" (Japanese for "eternal flame") triggers a cascading pipeline.
Entity relevance = max(similarity) across all layers. Gift scoring weights matched attributes: collection +0.03, model +0.04, symbol +0.01, backdrop +0.005. Results cached in Redis (5-min TTL) for instant repeat queries.
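The per-attribute boosts can be sketched as a simple additive adjustment (weights from the text; `gift_relevance` is a hypothetical helper):

```python
def gift_relevance(base: float, matched: list[str]) -> float:
    """Boost a gift's base relevance for each matched attribute type."""
    boosts = {"collection": 0.03, "model": 0.04, "symbol": 0.01, "backdrop": 0.005}
    return base + sum(boosts[m] for m in matched)
```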
AI Model Stack
Total: ~4.8B parameters across 6 models. SigLIP, BGE-M3, and MADLAD-400 run in a dedicated semantic-encoder microservice (7 GB RAM, 4 CPUs). All vector operations use pgvector HNSW indexes in PostgreSQL.
Achievement System
35+ achievements across 8 categories, many with repeatable instances and diminishing-return scaling (log2, sqrt).
Monochrome/Contrast Lord and God tiers are loss-based — owning a single non-qualifying gift revokes the achievement. This creates meaningful commitment to a collecting strategy.
Infrastructure
Tech Stack
A product of the Schum collective