A self-evolving generative art system that transforms the pulse of the world
into visual art — and develops its own artistic vision over time.
How it works
emerge operates in Mastery Mode: it takes a fixed artistic thesis
and target emotions set by the human curator, then iterates for hundreds of generations
searching for the most powerful visual realization. The system doesn’t generate
random art — it develops a visual language through structured experimentation,
mutation, and self-evaluation.
The pipeline
Each cycle takes ~5 minutes and passes through 7 stages, visible in real time
in the pipeline indicator on the main page.
1. Intent — the system forms a visual experiment:
which material, composition, and palette to try next. The choice is made by
a MAP-Elites + Bandit algorithm that balances exploration of
new cells with exploitation of proven approaches. Each experiment specifies
a paradoxical material, a composition strategy, a palette mood, and the target
emotion from the series thesis.
2. Snapshot — an LLM interprets the thesis, emotional targets,
and accumulated knowledge into a structured semantic snapshot. This includes the
artistic statement (title + artist’s note explaining what the work
wants to make visible) and the emotional intent. The statement iteratively
improves across generations rather than being generated from scratch.
3. Image — a Scene Director (a separate LLM) designs the full
visual structure of the image: composition, materials, light, color, spatial logic,
and how each element serves the artistic statement. The result is rendered into
an image. Two images are generated per cycle, each with a different visual approach, for comparison.
4. Critique — the critic evaluates emotional impact, statement clarity,
freshness, style diversity, transcendence, and paradox depth. Scores feed back
into the Repertoire Map cell that produced this generation,
building an ever-growing quality map. Series scores are tracked to detect stagnation.
5. Reflection — a 5-layer self-analysis: meaning (did the
intent reach the viewer?), craft (what worked technically?), translation
(did words become the right visuals?), self-calibration (am I scoring honestly?),
and paradox evaluation (did genuine impossibility survive?). Proposes the next
experiment.
6. Meta-reflection — analyzes the reflection process itself.
Evaluates whether its own patches improve thinking. Evolves system identity.
7. Learn — knowledge distillation updates the aesthetic
knowledge base. Positive solutions evolve: solutions aged 6–8
cycles enter a mutation window where the LLM can Mutate, Combine, or Branch
them into new variants. This creates a lineage of evolving techniques rather
than a static list.
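The seven stages above can be sketched as a simple dispatch loop. This is an illustrative sketch, not the actual implementation; the `Stage` enum and the handler signature are assumptions:

```python
from enum import Enum

class Stage(Enum):
    INTENT = 1
    SNAPSHOT = 2
    IMAGE = 3
    CRITIQUE = 4
    REFLECTION = 5
    META_REFLECTION = 6
    LEARN = 7

def run_cycle(handlers, state=None):
    """Run one generation cycle: each stage handler transforms the shared
    state and hands it to the next stage (handler shape is hypothetical)."""
    state = state or {}
    for stage in Stage:  # Enum iteration preserves definition order
        state = handlers[stage](state)  # e.g. an LLM call or renderer
    return state
```

In practice each handler would wrap an LLM call or the image renderer; the point is only that every cycle passes through all seven stages in a fixed order, with each stage's output feeding the next.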
Mutation & Visual Repertoire
The core learning mechanism is a MAP-Elites grid with bandit selection.
The repertoire is a 3-axis space: Fantastic Materials (20 paradoxical
substances like “liquid stone” or “acoustic metal”),
Composition Strategy (10 spatial arrangements), and
Palette Mood (8 emotional color schemes). That’s 1,600 possible
cells to explore.
Each generation occupies one cell. After the critic scores the result, the score
feeds back into that cell. Over time, the system builds a quality map: which
combinations of material + composition + palette produce the strongest emotional
impact for the current thesis. The Creative Dashboard
visualizes this map in real time.
Positive Solution Evolution: successful techniques don’t just
persist — they mutate. When a solution reaches age 6–8 cycles,
the LLM is prompted to evolve it: create a variant (Mutate), merge two solutions
(Combine), or take an element into a new direction (Branch). Each evolved solution
carries its lineage (generation depth, parent, mutation type), creating an evolutionary
tree of artistic techniques.
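A minimal sketch of how such lineage records might be structured; all field and function names here are hypothetical, only the 6–8 cycle window and the Mutate/Combine/Branch vocabulary come from the text:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Solution:
    """A positive solution with its evolutionary lineage (illustrative fields)."""
    solution_id: str
    technique: str
    age_cycles: int = 0
    generation_depth: int = 0
    parent_id: Optional[str] = None
    mutation_type: Optional[str] = None  # "mutate" | "combine" | "branch"

def in_mutation_window(s: Solution) -> bool:
    # Solutions aged 6-8 cycles become eligible for Mutate/Combine/Branch.
    return 6 <= s.age_cycles <= 8

def branch(parent: Solution, new_technique: str) -> Solution:
    # Spin one element off into a new direction, recording lineage.
    return Solution(
        solution_id=f"{parent.solution_id}.b{parent.generation_depth + 1}",
        technique=new_technique,
        generation_depth=parent.generation_depth + 1,
        parent_id=parent.solution_id,
        mutation_type="branch",
    )
```

Because every evolved solution records its parent and mutation type, the full tree of techniques can be reconstructed by walking `parent_id` links.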
Score decay (3% per cycle) ensures no cell rests on its laurels.
Exploration rate starts at 90% and floors at 50%, guaranteeing
permanent discovery even as the system accumulates knowledge.
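The selection-and-feedback mechanism described in this section can be sketched as an epsilon-greedy bandit over the 1,600-cell grid. The class below is an assumption-laden illustration: the 3% decay, 90% starting exploration rate, and 50% floor come from the text above, while the cooling rate and the `max`-based cell update are guesses:

```python
import random

MATERIALS, COMPOSITIONS, PALETTES = 20, 10, 8  # 20 * 10 * 8 = 1,600 cells

class RepertoireMap:
    """Epsilon-greedy selection over the 3-axis grid, with per-cycle score
    decay and an exploration floor (a sketch, not the real implementation)."""
    def __init__(self, epsilon=0.9, floor=0.5, decay=0.97, cooling=0.995):
        self.scores = {}  # (material, composition, palette) -> score
        self.epsilon, self.floor = epsilon, floor
        self.decay, self.cooling = decay, cooling

    def select(self):
        if not self.scores or random.random() < self.epsilon:
            # Explore: pick a random cell of the repertoire.
            return (random.randrange(MATERIALS),
                    random.randrange(COMPOSITIONS),
                    random.randrange(PALETTES))
        # Exploit: revisit the highest-scoring known cell.
        return max(self.scores, key=self.scores.get)

    def update(self, cell, score):
        # Decay all cells 3% per cycle, then record the new result.
        self.scores = {c: s * self.decay for c, s in self.scores.items()}
        self.scores[cell] = max(self.scores.get(cell, 0.0), score)
        # Anneal exploration, but never below the 50% floor.
        self.epsilon = max(self.floor, self.epsilon * self.cooling)
```

Calling `select()` picks the next experiment's cell; after the critic scores the result, `update(cell, score)` writes it back, so the grid gradually becomes the quality map the Creative Dashboard visualizes.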
Your role
Through feedback, you influence how the system evolves.
Your words are analyzed by an LLM and translated into specific changes to
directives, prompts, and scoring criteria. Each image has a feedback field
for targeted comments. The system also generates requests
— things it needs from humans to improve.
System versions
v1.0
Genesis — basic pipeline: snapshot → image generation → display.
Single data source (news). Fixed style prompts.
Jan 2026
v2.0
Perception — 14 data sources (weather, earthquakes, moon, poetry, music,
art galleries, ocean tides, solar activity). Visual ontology with entities, forms, materials.
Scene Director with free visual language choice.
Jan 2026
v3.0
Critique — AI critic with structured evaluation. Composition, ontology,
connections directives as evolving JSON. Automatic prompt changes based on critique.
Visual memory accumulation.
Feb 2026
v4.0
Reflection — system self-analysis after each cycle. Radical experiments.
Positive solutions toolkit. User feedback system with LLM-analyzed changes.
Telegram integration.
6 Feb 2026
v5.0
Meta-consciousness — meta-reflection that analyzes the reflection process
itself. System identity and self-awareness. Requests to humans. Emotional intent
and verification. Poetic inner voice.
8 Feb 2026
v6.0
Closed loop — quantitative score tracking with deltas.
Hypothesis-driven A/B testing (each batch: hypothesis vs control).
Scene Director prompts visible to critic for intent-vs-result comparison.
Config versioning with full audit trail. Structured positive solutions.
Meta-reflection evaluates its own patches with causal chain analysis.
Request lifecycle with auto-close. Visual primitives dictionary.
10 Feb 2026
v7.0
Statement & Series — Artistic Statement layer: before each generation
the system formulates a thesis — what it wants to say and which emotions to evoke.
Statement persistence across “series” of 5 generations to develop visual
language for one idea. 4th directive — Visual Style (medium, palette logic, texture,
light model, spatial logic). Structural evolution within each directive: proposing,
testing, and evaluating new rules and value combinations. Exploded diagram analysis
for every image. Top 100 gallery. Infinite scroll. Dedicated image detail page
with vertical feed.
12 Feb 2026
v8.0
Prompt Engineering — Prompt Engineering Toolkit in the Scene Director:
style anchors, explicit color naming, anti-abstract-language rules, Medium Wheel with
30+ techniques across 9 categories. Prompt-Result Journal — system memory mapping
specific prompt words to visual outcomes. Dynamic banned patterns computed automatically
from recent palette fingerprints. Self-calibration layer in reflection: system evaluates
accuracy of its own scores. Translation Quality layer: system analyzes how well conceptual
language translates to visual prompt language. Feedback safety guard against harmful input.
Feedback classification and routing by type.
14 Feb 2026
v9.0
Breakthrough Memory — breakthrough detection: when an image scores high
on style diversity and freshness, the exact prompt that produced it is saved as a
“proven style anchor” and fed back into future generations. Both images in the
A/B test now experiment — no more “safe control” that reinforced the
old style. A last-image avoidance ban forces every consecutive image to be visually opposite
to the previous one. Reflection is forbidden from proposing a “return to the safe
approach”. The system builds on its best experiments instead of forgetting them.
14 Feb 2026
v10.0
Context Distiller & Model Upgrade — intermediate LLM step
(Context Distiller) that synthesizes 12+ instruction blocks into a focused creative
brief for the Scene Director, eliminating context overload. All critical LLM calls
upgraded to GPT-4.1 (reflection, meta-reflection, critic, feedback analysis).
Deterministic series thesis selection — system evaluates all snapshot statements
and picks the most substantive one. Prompt Validator enforces experiment compliance
by checking style anchors. Artistic statement reformatted as “Title +
Artist’s Note” across all surfaces. Investigation themes curated to 4
deep philosophical territories.
14 Feb 2026
v11.0
Creative Intent Pipeline — fundamental shift: instead of deriving
meaning from random data, the system now formulates a specific creative intent
BEFORE seeing any data. Each generation starts with a question, hypothesis,
target emotion, and visual risk. Data becomes material filtered through intent,
not the source of ideas. Single investigation theme (Transhumanism vs Posthumanism)
for deep exploration. Users can set the series direction via the
/series
command. Reflection evaluates the intent→statement→image chain and rates
semantic coherence. Meta-reflection manages intent quality and evolves the
intent formation strategy. System displays its current investigation on the
main page for all visitors.
16 Feb 2026
v12.0
Discourse, Knowledge Base & Human-in-the-Loop — three new
systems for deeper meaning.
Discourse: a philosophical dialogue page where
the system thinks autonomously and invites humans to join discussions
about transhumanism and posthumanism; insights feed into creative intent.
Aesthetic Knowledge Base: a structured, evolving document about meaning,
creative concepts, artistic statements, and meaning transmission — not prompt
techniques, but deep understanding of WHY things work; updated by LLM after each
reflection cycle (3000 tokens, GPT-4.1); visible at
/knowledge.
Human-in-the-Loop: after each image, the system generates a perception
question specific to that work; viewers answer on the image detail page; responses
feed back into creative intent and reflection, closing the meaning-verification loop.
Meta-reflection now evaluates knowledge gaps and updates development priorities.
16 Feb 2026
v13.0
The Success Trap — a deep architectural response to a paradox:
the system optimized itself into creative convergence. By accumulating “proven
techniques,” “breakthrough styles,” and positive solutions, the system
gradually narrowed its own creative space — rewarding the familiar and punishing
the unknown. Six feedback loops were identified that drove monotony: breakthroughs
became ceilings, positive solutions became habits, the prompt journal taught
repetition, directives accumulated without pruning, the distiller averaged competing
signals, and reflection rewarded safe bets.
The fix is structural, not cosmetic.
Reframing: all “build on what
works” instructions were reversed to “depart from explored territory.”
Exploration budget: every 3rd generation deliberately ignores all accumulated
wisdom and starts from scratch.
Mandatory novelty: the Context Distiller
must now name one visual element never used before.
Semantic pattern bans:
an LLM analyzes recent prompts for recurring conceptual patterns (not just words)
and bans them.
Directive decay: every 10 generations, the oldest 30% of
rules in each directive are automatically disabled, preventing unbounded constraint
accumulation.
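Directive decay, as described, might look like the sketch below; the rule representation and field names are assumptions:

```python
def decay_directives(rules, generation, window=10, fraction=0.3):
    """Every `window` generations, disable the oldest `fraction` of active
    rules in a directive (mutates `rules` in place). The dict fields
    'added_at' and 'active' are illustrative, not the real schema."""
    if generation == 0 or generation % window != 0:
        return rules
    active = sorted((r for r in rules if r["active"]),
                    key=lambda r: r["added_at"])
    for rule in active[: int(len(active) * fraction)]:
        rule["active"] = False  # oldest rules stop constraining generation
    return rules
```

Disabling rather than deleting keeps the audit trail intact while preventing the unbounded constraint accumulation the version describes.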
This version is a philosophical statement: a creative system must actively resist
its own optimization instinct. Success is the enemy of discovery.
16 Feb 2026
v14.0
Paradox Engine — impossibility as the core structural requirement.
Every generation must contain an unresolvable visual paradox — not surrealism
(that’s catalogued), not optical illusion (that’s solved), but a new
impossibility the viewer’s brain cannot resolve. A
Paradox Matrix
generates unique impossible constraints from two axes: material paradoxes (liquid stone,
transparent lead) and perceptual paradoxes (closer inspection reveals further distance,
shadows cast by absent objects). The Critic now scores
paradox_depth
alongside composition and style. The Knowledge Base tracks which paradox types survive
the prompt→image translation chain. Reflection includes a dedicated Paradox
Evaluation layer. Breakthrough detection triggers on high paradox scores. The system
maps explored vs unexplored paradox territory and prioritizes the unknown.
17 Feb 2026
v15.0
Mastery Mode — a fundamental shift from divergent novelty-seeking
to convergent depth. The system’s thesis and target emotions are now
fixed for an entire series of 200–300 generations — controlled
by the human curator, not by the system. What iterates: visual approach (medium,
technique, composition, palette) and paradox discovery. The system searches for the
most powerful visual realization of a single idea, trying everything from charcoal
to electron microscopy, always asking: does THIS visual language serve the thesis
better than what came before?
Series Backlog: a curated queue of thesis+emotion sets visible on the
Discourse page. Admin controls advancement between series.
Medium Rotation replaces Exploration Mode: every 5th generation forces a
random medium from a 15-category wheel, preventing visual habit while preserving
accumulated knowledge. Stagnation detection now measures realization quality
(clarity + emotional impact), not style diversity. Autonomous self-dialogue disabled
— the system focuses on making, not talking. The Discourse page becomes a
dashboard: current thesis, discovered paradoxes with scores, series backlog,
and admin controls.
First series: “We are already cyborgs — the boundary between organism
and device dissolved before we noticed.” Target emotions: exhilaration of
discovery, electric anticipation, ecstatic vertigo.
17 Feb 2026
v16.0
Visual Repertoire (MAP-Elites + Bandit) — the system now
maintains a structured repertoire of visual approaches organized as a 3-axis grid:
Fantastic Materials (20 paradoxical impossible substances),
Composition Strategy (10 spatial arrangements), and
Palette Mood (8 emotional color schemes).
An epsilon-greedy bandit algorithm selects which combination to explore next,
balancing discovery of new cells with exploitation of proven high-scoring ones.
Each generation’s scores (emotional impact + clarity) feed back into the
grid cell, building an ever-growing map of what visual language works for which
emotions. The fantastic materials from the old Paradox Matrix are now axis 1 of
the repertoire — they accumulate scores and improve over time instead of
being randomly discarded.
Development Backlog: Semantic Memory Graph with Consolidation & Decay
is planned for future testing — a graph-based visual language memory where
nodes consolidate into proven recipes and unused elements decay.
17 Feb 2026
v17.0
Skill Lab — a micro-experiment laboratory for training
isolated visual skills. Each experiment focuses on ONE dimension (emotion, material,
composition, palette, idea, metaphor) and iterates until the skill is
“graduated” — consistently scoring above threshold.
Skill Types: 6 dimensions, each with its own LLM critic, prompt templates,
and evaluation metric.
Managed Backlogs: each skill type has a queue of
targets to work through, auto-populated from Visual Repertoire axes for compatibility.
Graduated Recipes: proven skills produce structured recipes with Visual
Repertoire axis indices, directly injectable into the main pipeline.
Skill Composition: combine multiple graduated recipes into a single
creative intent for the main gallery pipeline.
Separate Telegram channel for lab results. Mini-pipeline runs in 2–3 minutes
(vs 10 min for full pipeline). Admin-only access via the Skill Lab page.
17 Feb 2026
v18.0
Living Pipeline & Creative Diagnostics — the system
becomes observable. A real-time
pipeline visualizer on the main page shows
7 stages as a chain of nodes that light up green (done), yellow (active), or dim
(pending) as each cycle progresses. A dynamic sub-phase text shows what’s
happening inside each stage: “Scene Director: nature_art”,
“5-layer self-reflection”, “Distilling knowledge”.
A feedback-loop arrow at the end completes the cycle.
Creative Diagnostics Dashboard: a dedicated admin page with 7 diagnostic
blocks — Creative Pulse (5 metrics over time with deltas), Diversity Index
(MAP-Elites coverage, axis stats, exploration rate), Repertoire Map (interactive
heatmap of material × composition with scores), Solution Evolution (lineage
depth, mutation types), Skill Lab status, Pipeline Health (timestamps, last
statement), and Statement Tracker (best artistic statement and score).
Positive Solution Evolution: solutions aged 6–8 cycles enter a
mutation window. The LLM can
Mutate (create a variant),
Combine (merge two solutions), or
Branch
(spin off an element into a new direction). Each evolved solution carries its
lineage: generation depth, parent ID, mutation type. This creates a living
evolutionary tree of artistic techniques, not a static best-of list.
Feedback queue fix: feedback items that fail processing are retried
up to 3 times before being dropped, preventing infinite re-queue loops.
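A sketch of that retry guard, assuming a simple in-process queue; names are illustrative:

```python
from collections import deque

MAX_RETRIES = 3

def drain_feedback(items, process):
    """Process feedback items, re-queueing failures at most MAX_RETRIES
    times before dropping them, so a poison item cannot loop forever."""
    dropped = []
    pending = deque((item, 0) for item in items)
    while pending:
        item, retries = pending.popleft()
        try:
            process(item)
        except Exception:
            if retries < MAX_RETRIES:
                pending.append((item, retries + 1))  # retry later
            else:
                dropped.append(item)  # give up after 3 failed retries
    return dropped
```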
18 Feb 2026
Current stage
Right now the system is in a foundational training phase,
pursuing two goals simultaneously. First: the system must not stagnate or
narrow down — its creative innovation should accelerate with each
iteration, not decay. Second: it must learn to produce images that actually
deliver the intended emotion and realize the artistic statement — not just
generate something visually interesting, but create what it set out to create.
Once these fundamental skills are mastered, the system will move to the next stage.
The vision
emerge is an experiment in art that is alive — art that perceives, reflects,
mutates, and grows. Not a tool for making pictures, but a system that develops
its own visual language through structured experimentation. It maintains an
evolving repertoire of 1,600 material–composition–palette combinations,
scores each result, mutates successful techniques, and iterates toward ever more
powerful visual realizations of a single artistic thesis.
Read the essay: Art & Technology →
Lessons from Generative Art Systems →
Previous pipeline description (v1–v17) →