
How We See Beauty in Gifts

gifts.supply is an AI-powered analytics platform for Telegram Gift NFTs. We don't just track rarity — we measure visual harmony, semantic meaning, and color science to understand what makes a gift truly special.

6 AI Models · ~4.8B Parameters · 400+ Languages · 3 Search Layers · 35+ Achievements · 42 Services

Scoring Philosophy

Most NFT platforms rank items by rarity alone. We believe visual harmony matters more for collectibles — that's why the aesthetic score carries 62% of the final Collector Score.

collector_score = 0.23 × rarity + 0.62 × aesthetic + 0.15 × serial

  • Rarity (23%) — how rare the attributes are
  • Aesthetic (62%) — visual harmony of the combo
  • Serial (15%) — number beauty & rank

Rarity is weighted: 65% model + 15% backdrop + 20% symbol. Serial score rewards both low numbers (rank 30%) and beautiful patterns (beauty 70%) — palindromes, meme numbers (69, 420, 1337), repeating digits.
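The blend above can be sketched in a few lines. The function names and the serial-beauty heuristics below are illustrative simplifications, not the production scoring code; only the weights are taken from the formulas in the text.

```python
# Toy sketch of the Collector Score blend. Weights come from the formulas
# above; the serial-beauty heuristics are simplified illustrations.
MEME_NUMBERS = {69, 420, 1337}

def serial_beauty(n: int) -> float:
    """Simplified beauty heuristic: palindromes, meme numbers, repeated digits."""
    s = str(n)
    score = 0.0
    if len(s) >= 2 and s == s[::-1]:
        score = max(score, 0.8)          # palindrome, e.g. 1221
    if n in MEME_NUMBERS:
        score = max(score, 0.9)          # meme number
    if len(s) >= 2 and len(set(s)) == 1:
        score = max(score, 1.0)          # repeating digits, e.g. 777
    return score

def serial_score(n: int, max_serial: int) -> float:
    rank = 1.0 - (n - 1) / max(max_serial - 1, 1)   # low serials rank higher
    return 0.30 * rank + 0.70 * serial_beauty(n)

def rarity_score(model: float, backdrop: float, symbol: float) -> float:
    return 0.65 * model + 0.15 * backdrop + 0.20 * symbol

def collector_score(rarity: float, aesthetic: float, serial: float) -> float:
    return 0.23 * rarity + 0.62 * aesthetic + 0.15 * serial
```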

Compatibility Engine

The aesthetic score measures how well a gift's components fit together. Three compatibility pipelines run across every possible attribute pair:

aesthetic = 0.56 × symbol↔model + 0.18 × backdrop↔model + 0.26 × collection↔symbol

Symbol ↔ Model Compatibility (56%)

The primary compatibility — does the symbol make sense with the model? Four AI dimensions are blended:

symbol_model = 0.45 × semantic + 0.28 × visual + 0.10 × strict + 0.17 × cross
  • Semantic (45%) — Text embedding similarity via BGE-M3 (1024d). Do they mean similar things?
  • Visual (28%) — Image embedding similarity via DINOv2 ViT-L/14 (1024d). Do they look alike?
  • Strict (10%) — Multiplicative: semantic × visual. High only when both agree.
  • Cross-modal (17%) — SigLIP (1152d): can the AI recognize the model from the symbol's image, and vice versa?
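Assuming all four inputs are already percentile-scaled similarities in [0, 1], the blend reduces to a few lines; note that the strict term is multiplicative, so it only rewards pairs where both channels agree:

```python
def symbol_model_score(semantic: float, visual: float, cross: float) -> float:
    """Blend the four symbol/model dimensions (weights from the formula above)."""
    strict = semantic * visual   # multiplicative: high only when both agree
    return 0.45 * semantic + 0.28 * visual + 0.10 * strict + 0.17 * cross
```

With semantic = 0.9 but visual = 0.1, strict is only 0.09, so the multiplicative term contributes almost nothing when the channels disagree.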

Backdrop ↔ Model Compatibility (18%)

Backdrops are radial gradients — no meaningful image embeddings. We compare them semantically only (BGE-M3 text embeddings). Separately, we compute color separation in dual color space (70% HSV + 30% OKLCH) for the monochrome/contrast achievement system.

Collection ↔ Symbol Compatibility (26%)

Does this symbol fit the collection's theme? Semantic-heavy:

coll_symbol = 0.60 × semantic + 0.20 × visual + 0.15 × strict + 0.05 × cross

All raw cosine similarities are percentile-scaled (p5→0, p95→1) to normalize distributions across entities. Computed in PostgreSQL via pgvector with HNSW indexes (m=16, ef=64).
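A minimal sketch of that percentile scaling, assuming NumPy arrays of raw cosine similarities (the production version runs inside PostgreSQL):

```python
import numpy as np

def percentile_scale(sims: np.ndarray) -> np.ndarray:
    """Map raw similarities so that p5 -> 0 and p95 -> 1, clipping the tails."""
    p5, p95 = np.percentile(sims, [5, 95])
    if p95 <= p5:                        # degenerate distribution
        return np.zeros_like(sims)
    return np.clip((sims - p5) / (p95 - p5), 0.0, 1.0)
```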

Color Science

Every model and backdrop image undergoes pixel-level color analysis in multiple color spaces:

Dominant Color Extraction

OKLab k-means clustering (5 clusters, 10K pixel samples, 32 iterations) extracts the dominant color palette. Results stored as OKLCH (Lightness, Chroma, Hue) and HSV.
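A toy version of the clustering step, assuming NumPy and pixels already converted to OKLab; the sRGB→OKLab transform and the OKLCH/HSV conversion of the results are omitted:

```python
import numpy as np

def dominant_colors(pixels_oklab: np.ndarray, k: int = 5,
                    samples: int = 10_000, iters: int = 32,
                    seed: int = 0) -> np.ndarray:
    """Plain k-means over an (N, 3) array of OKLab pixels; returns k centres."""
    rng = np.random.default_rng(seed)
    take = min(samples, len(pixels_oklab))
    pts = pixels_oklab[rng.choice(len(pixels_oklab), size=take, replace=False)]
    centers = pts[rng.choice(len(pts), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign every sample to its nearest centre
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = pts[labels == j]
            if len(members):             # keep empty clusters where they are
                centers[j] = members.mean(axis=0)
    return centers
```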

Monochrome Detection

A 5-stage algorithm in OKLab (perceptually uniform color space) determines whether an image is monochromatic:

  1. Gray gate — if p95(chroma) < 0.03, classify as near-grayscale
  2. Hue concentration (R) — circular mean of (a, b) vectors; R→1 = single hue dominates, R→0 = scattered
  3. Perpendicular spread — p95 deviation from dominant hue axis (threshold: 0.03)
  4. Outlier fraction — max 2% pixels may strongly deviate
  5. Final verdict — monochromatic if all checks pass
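The first two stages can be sketched as follows, again assuming NumPy and OKLab pixels; the thresholds match the text, while the function shape is an assumption:

```python
import numpy as np

def monochrome_checks(pixels_oklab: np.ndarray) -> dict:
    """Stages 1-2 of the monochrome pipeline on (N, 3) OKLab pixels."""
    a, b = pixels_oklab[:, 1], pixels_oklab[:, 2]
    chroma = np.hypot(a, b)
    gray = bool(np.percentile(chroma, 95) < 0.03)   # stage 1: gray gate
    mask = chroma > 1e-6
    if mask.any():
        ua, ub = a[mask] / chroma[mask], b[mask] / chroma[mask]
        # stage 2: circular mean length R of the unit (a, b) hue vectors
        R = float(np.hypot(ua.mean(), ub.mean()))
    else:
        R = 0.0
    return {"gray": gray, "hue_concentration": R}
```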

Color Separation

For backdrop ↔ model pairs, we measure how distinguishable the model is against its backdrop. Dual-space blend: 70% HSV (hue 50%, saturation 25%, value 25%) + 30% OKLCH with chroma-gated hue distance (low-chroma colors have unstable hue → gate at C=0.08).
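A sketch of the chroma gate on the OKLCH hue term; the function names are illustrative:

```python
def hue_distance(h1_deg: float, h2_deg: float) -> float:
    """Circular hue distance, normalised to [0, 1]."""
    d = abs(h1_deg - h2_deg) % 360.0
    return min(d, 360.0 - d) / 180.0

def gated_hue_distance(h1: float, c1: float, h2: float, c2: float,
                       gate: float = 0.08) -> float:
    """Fade the hue term out when either colour's chroma is below the gate."""
    weight = min(c1, c2, gate) / gate    # 0 for near-gray, 1 above the gate
    return weight * hue_distance(h1, h2)
```

A near-gray color (C close to 0) thus contributes nothing through the hue term, which matches the observation that low-chroma hue is unstable.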

Visual Weight & Rhythm

Rule-based tags from OKLCH values: weight (light/medium/heavy) from 0.5×(1-L) + 0.3×(C/0.15) + 0.2×contrast, and rhythm (calm/dynamic/chaotic) from RMS image contrast thresholds.
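A sketch of both rules: the weight formula is quoted from the text, while the bucket cut-offs (0.33/0.66 for weight, 0.15/0.35 for rhythm) are illustrative assumptions:

```python
def visual_weight(L: float, C: float, contrast: float) -> str:
    """Weight tag from OKLCH lightness/chroma + contrast (formula from the text)."""
    w = 0.5 * (1 - L) + 0.3 * (C / 0.15) + 0.2 * contrast
    if w < 0.33:
        return "light"
    return "medium" if w < 0.66 else "heavy"

def rhythm(rms_contrast: float) -> str:
    """Rhythm tag from RMS image contrast (thresholds are assumptions)."""
    if rms_contrast < 0.15:
        return "calm"
    return "dynamic" if rms_contrast < 0.35 else "chaotic"
```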

Semantic Tagging

Every attribute (model, symbol, backdrop) receives automatic semantic tags derived from LLM-generated keywords + color analysis:

  • Mood (8 values) — cheerful · sad · romantic · mysterious · playful · serene · intense · neutral
  • Style (8 values) — strict · naive · gloomy · playful · minimalist · elaborate · retro · modern
  • Weight (3 values) — light · medium · heavy — from OKLCH lightness + chroma
  • Rhythm (3 values) — calm · dynamic · chaotic — from RMS image contrast

Mood and style use keyword-matching against 150+ trigger words per category, with LLM category fallback (e.g., "animal" → playful, "celestial" → serene + mysterious). Tags power achievements like Mood Curator (80%+ dominant mood) and Chaotic Energy (7+ distinct moods).

Search Pipeline

Searching "永遠の炎" (Japanese for "eternal flame") triggers a cascading pipeline:

  0. Translation — MADLAD-400 (3B params, Google Research, CTranslate2 int8) translates 400+ languages → English. "永遠の炎" → "eternal flame"
  1. Regex Entity Matching — word-boundary regex across name, slug, theme, artist, LLM keywords, text metadata. Differentiated scoring: name/slug (1.0) → artist/theme (0.95) → keywords (0.90) → text_meta (0.85).
  2. Semantic HNSW Search — BGE-M3 (1024d) text embedding → cosine similarity against models, symbols, collections. HNSW index (m=16, ef=64). Threshold: 0.55.
  3. Cross-Modal SigLIP Search — SigLIP So400m/14 (1152d) text → image space; finds models and symbols whose images match the query text. Threshold: 0.50.

Entity relevance = max(similarity) across all layers. Gift scoring weights matched attributes: collection +0.03, model +0.04, symbol +0.01, backdrop +0.005. Results cached in Redis (5-min TTL) for instant repeat queries.
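The combination step can be sketched as follows, with the attribute weights taken from the text and the data shapes assumed:

```python
# Per-attribute boosts from the text; the dict/set shapes are assumptions.
ATTRIBUTE_BOOST = {"collection": 0.03, "model": 0.04,
                   "symbol": 0.01, "backdrop": 0.005}

def entity_relevance(layer_scores: dict[str, float]) -> float:
    """Relevance is the max similarity across all search layers."""
    return max(layer_scores.values(), default=0.0)

def gift_boost(matched: set[str]) -> float:
    """Sum of per-attribute boosts for attributes matched by the query."""
    return sum(ATTRIBUTE_BOOST[a] for a in matched)
```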

AI Model Stack

  • SigLIP So400m/14 — 400M params · 1152d vectors. Cross-modal text↔image search: layer 3 of the search pipeline and cross-modal compatibility scoring.
  • BGE-M3 — 330M params · 1024d vectors. Semantic text search: layer 2 of the search pipeline plus semantic compatibility between entities.
  • MADLAD-400-3B-mt — 3B params · CTranslate2 int8. Multilingual translation (400+ languages → English) for search queries.
  • DINOv2 ViT-L/14 — 300M params · 1024d vectors. Visual feature extraction; powers visual compatibility scoring between entities.
  • CLIP ViT-L-14 — 400M params · 768d vectors. Legacy image embeddings for color-aware image analysis and aesthetic scoring.
  • ALS Collaborative Filtering — custom · 64d vectors. Taste-profile embeddings for personalized discovery-feed recommendations.

Total: ~4.8B parameters across 6 models. SigLIP, BGE-M3, and MADLAD-400 run in a dedicated semantic-encoder microservice (7GB RAM, 4 CPU). All vector operations use pgvector HNSW indexes in PostgreSQL.

Achievement System

35+ achievements across 8 categories, many with repeatable instances and diminishing-return scaling (log2, sqrt):

  • Size & Diversity — Whale (500), Leviathan (1000+), World Tour (50 collections)
  • Rarity — Rare (p95), Epic (p99), Mythic (top 10); repeatable
  • Monochrome — 5 tiers: Lover → Master → Beast → Lord → God; Lord/God can be lost
  • Contrast — same 5 tiers as Monochrome, based on the color separation score
  • Serial Numbers — Genesis (#1), Palindrome, Meme Lord; repeatable
  • Compatibility — Perfect Match (≥0.95), Triple Harmony (all ≥0.90)
  • Color Spectrum — Rainbow (6 hues), Full Spectrum (12), Pastel Dream, Neon Rush
  • Inscriptions — Love Collection, Funny Bone, Storyteller (200+ chars)

Monochrome/Contrast Lord and God tiers are loss-based — owning a single non-qualifying gift revokes the achievement. This creates meaningful commitment to a collecting strategy.
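The log2/sqrt diminishing-return scaling mentioned above could look like this; the exact curves and base values are not specified, so this sketch is an assumption:

```python
import math

def repeat_score(base: float, count: int, curve: str = "log2") -> float:
    """Diminishing returns for repeatable achievements (illustrative)."""
    if count <= 0:
        return 0.0
    growth = math.log2(count + 1) if curve == "log2" else math.sqrt(count)
    return base * growth
```

Under the log2 curve, counts of 1, 3, and 7 yield growth factors of 1, 2, and 3: each doubling of repeats adds only one more unit of score.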

Infrastructure

153+ API Endpoints · 55 Frontend Pages · 44 Database Tables · 19 Background Workers · 55 SQL Migrations · ~46K Lines of Code

Tech Stack

Next.js 15 · React 19 · FastAPI · Python 3.11 · PostgreSQL 15 · pgvector · Redis 7 · ClickHouse · Docker · MinIO · TON Blockchain · Telegram MTProto · SigLIP · BGE-M3 · DINOv2 · MADLAD-400 · CTranslate2 · OKLab/OKLCH · HNSW · Tailwind CSS
▒███ ▒███ ▒███ ▒███    ▒███ ▒███    ▒███
▒███ ▒███ ▒███ ▒███    ▒███ ▒████  ▒████
▒███ ▒███ ▒███ ▒███    ▒███ ▒█████▒█████
▒███ ▒███ ▒███ ▒████  ▒████ ▒███████████
▒███ ▒███ ▒███ ▒███████████ ▒███████████
▒███ ▒███ ▒███  ▒██████████ ▒███████████
▒███ ▒███ ▒███   ▒█████████ ▒███▒███▒███
▒█████████████         ▒███ ▒███ ▒█ ▒███
▒█████████████ ▒███████████ ▒███    ▒███
▒█████████████ ▒███████████ ▒███    ▒███
▒█████████████ ▒██████████  ▒███    ▒███

A product of the Schum collective