Interactive Lab
63 simulations across physics, AI/ML, control theory, signal processing, and math — all running live in your browser. No installs, no libraries, just pure JavaScript.
Physics & Dynamics
Classical mechanics, oscillations, waves, collisions, and soft-body physics simulated in real-time.
Under/over/critically damped harmonic oscillator. Adjust mass, spring constant, and damping to see the time response.
How it works
A mass on a spring with damping. The damping ratio (ζ) determines behavior: underdamped oscillates, critically damped returns fastest without overshoot, overdamped is sluggish.
Drag the mass block to displace it, release to simulate. Sliders for mass, spring constant (k), and damping (c). Presets show each regime. Bottom plot shows x(t) over time.
Solves m·x'' + c·x' + k·x = 0. The damping ratio ζ = c/(2√(mk)) classifies the response. Uses RK4 integration.
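The RK4 step described above can be sketched in a few lines of plain JavaScript (an illustrative sketch with made-up names, not the sim's actual source):

```javascript
// RK4 integration of m·x'' + c·x' + k·x = 0, written as the first-order
// system [x, v]' = [v, -(c·v + k·x)/m].
function makeOscillator(m, c, k) {
  const deriv = ([x, v]) => [v, -(c * v + k * x) / m];
  return function step([x, v], dt) {
    const k1 = deriv([x, v]);
    const k2 = deriv([x + k1[0] * dt / 2, v + k1[1] * dt / 2]);
    const k3 = deriv([x + k2[0] * dt / 2, v + k2[1] * dt / 2]);
    const k4 = deriv([x + k3[0] * dt, v + k3[1] * dt]);
    return [
      x + (dt / 6) * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
      v + (dt / 6) * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]),
    ];
  };
}

const zeta = (m, c, k) => c / (2 * Math.sqrt(m * k)); // damping ratio

// Critically damped preset: m = 1, k = 4 gives zeta = 1 at c = 4.
const step = makeOscillator(1, 4, 4);
let state = [1, 0]; // released from x = 1 at rest
for (let i = 0; i < 1000; i++) state = step(state, 0.01);
console.log(zeta(1, 4, 4), state[0]); // zeta = 1; x has decayed toward 0
```

With ζ = 1 the mass returns to rest without overshooting, which the final state reflects.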
1D string wave using finite differences. Click to create disturbances, toggle boundary conditions between fixed and free.
How it works
A vibrating string modeled by the wave equation u_tt = c²·u_xx. Disturbances propagate, reflect off boundaries, and interfere with each other.
Click and drag the string to deform it. Toggle boundary conditions: Fixed (reflects inverted), Free (reflects same), Absorbing (waves exit). Presets: Pulse, Sine, Gaussian, Pluck.
Central finite differences in space and time. Multiple substeps per frame for stability. Color maps displacement (blue=negative, red=positive).
2D elastic collisions with momentum and energy conservation. Drag to launch balls, adjust mass and restitution.
How it works
Balls collide and exchange momentum. With restitution=1 (elastic), kinetic energy is perfectly conserved. Lower values simulate inelastic collisions where energy is lost.
Click and drag to spawn a ball — hold time = mass, drag direction = velocity. Sliders for restitution and gravity. Presets: Pool Break, Newton's Cradle. Watch momentum/energy stats stay conserved.
Impulse-based collision response: j = -(1+e)·(v_rel·n)/(1/m_a + 1/m_b). Substep integration for accurate detection at high speeds.
A grid of particles connected by distance constraints. Grab and drag the cloth, enable wind, pin/unpin corners.
How it works
Cloth is modeled as a grid of particles connected by springs (distance constraints). Verlet integration provides stable, realistic motion without explicit velocity tracking.
Click and drag to grab the cloth. Toggle wind on/off, adjust wind strength. Click corners to pin/unpin them. Right-click or shift+click to tear the cloth at that point.
Verlet integration: x_new = 2·x - x_old + a·dt². Distance constraints solved iteratively (Jakobsen method). Gravity + wind forces applied each frame.
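The Verlet step and one Jakobsen-style constraint pass look roughly like this (a minimal sketch, names are illustrative):

```javascript
// Position Verlet: each particle stores current and previous position;
// velocity is implicit in (pos - prevPos).
function verletStep(p, ax, ay, dt) {
  const nx = 2 * p.x - p.px + ax * dt * dt;
  const ny = 2 * p.y - p.py + ay * dt * dt;
  p.px = p.x; p.py = p.y;
  p.x = nx;  p.y = ny;
}

// One distance constraint: push both endpoints halfway back toward the
// rest length. Pinned particles stay put, so the other end absorbs the
// full correction over repeated iterations.
function satisfy(a, b, rest) {
  const dx = b.x - a.x, dy = b.y - a.y;
  const d = Math.hypot(dx, dy);
  const corr = (d - rest) / d / 2;
  if (!a.pinned) { a.x += dx * corr; a.y += dy * corr; }
  if (!b.pinned) { b.x -= dx * corr; b.y -= dy * corr; }
}

const a = { x: 0, y: 0, px: 0, py: 0, pinned: true };
const b = { x: 2, y: 0, px: 2, py: 0, pinned: false };
verletStep(b, 0, 9.8, 1 / 60);                 // gravity pulls b down
for (let i = 0; i < 10; i++) satisfy(a, b, 1); // enforce rest length 1
console.log(Math.hypot(b.x - a.x, b.y - a.y)); // ≈ 1
```

Each iteration halves the remaining constraint error, which is why more solver iterations make the cloth stiffer.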
Rope and chain physics using Verlet integration. Grab any point and swing it around. Add/remove segments.
How it works
A chain of particles connected by fixed-length constraints. Verlet integration naturally handles momentum and inertia, creating realistic rope/chain behavior.
Click and drag any point on the rope to move it. Toggle between rope (flexible) and bridge (horizontal) modes. Adjust segment count, gravity, and constraint iterations.
Same Verlet integration as cloth, but 1D chain. More constraint iterations = stiffer rope. Damping applied to prevent infinite oscillation.
Interactive fluid dynamics — swirl colorful smoke with your mouse. Real-time Navier-Stokes solver with advection, diffusion, and pressure projection.
How it works
The Navier-Stokes equations govern fluid motion. This solver simulates velocity and density fields on a grid — your mouse injects velocity and dye, creating swirling, turbulent patterns in real-time.
Move your mouse to inject dye and velocity. Adjust viscosity (fluid thickness), diffusion rate, and color mode. Toggle vorticity confinement for sharper swirls. Clear to reset the field.
Stam's stable fluid solver: advect → diffuse → project (pressure solve via Gauss-Seidel). Incompressibility enforced via Helmholtz-Hodge decomposition. Vorticity confinement adds curl back to prevent numerical damping.
Paint with elements — sand, water, fire, oil, plant, stone — and watch them interact. Emergent physics from simple cellular automata rules.
How it works
Each pixel is an element with simple rules: sand falls and piles, water flows sideways, fire rises and burns, oil floats on water, plants grow toward light. Complex emergent behavior from simple local interactions.
Select an element from the palette. Click/drag to paint on the canvas. Adjust brush size. Elements interact: water extinguishes fire, fire burns oil and plants, sand displaces water. Clear to reset.
Cellular automaton with per-element update rules. Sand: swap with empty/water below. Water: flow to sides if can't fall. Fire: probabilistic spread + lifetime decay. Processed bottom-up with randomized horizontal scan to prevent directional bias.
Place positive and negative charges — watch electric field lines and equipotential contours form in real-time.
How it works
Electric charges create fields that exert forces on other charges. Field lines show direction, density shows strength. Equipotential lines show surfaces of constant voltage — always perpendicular to field lines.
Click to place positive charge, shift+click for negative. Drag charges to reposition. Toggle field lines, equipotentials, and vector field display. Adjust charge magnitude. Right-click to remove.
Coulomb's law: E = kQ/r². Superposition: total field = vector sum of all charges. Field lines traced via RK4 integration. Potential V = kQ/r, contours found by marching squares.
Watch particles do random walks — the foundation of diffusion, stock prices, and statistical mechanics. Track mean squared displacement.
How it works
Brownian motion: particles buffeted by invisible molecules take random walks. Einstein showed this proves atoms exist. The mean squared displacement grows linearly with time — a fundamental result in physics.
Adjust particle count, step size, and temperature. Toggle trail rendering. Watch the MSD plot grow linearly. Click to add a large tracked particle that shows its full random walk path.
Each step: Δx, Δy ~ N(0, σ²) with σ ∝ √(kT). Mean squared displacement: ⟨r²⟩ = 4Dt where D = kT/(6πηa). Einstein relation connects microscopic to macroscopic.
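A quick numerical check of the linear MSD growth, using Box-Muller for the Gaussian steps (a sketch under the same assumptions as above, with σ set to 1):

```javascript
// Standard normal sample via the Box-Muller transform.
function gaussian() {
  const u = 1 - Math.random(), v = Math.random(); // u > 0 so log is safe
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Mean squared displacement of an ensemble of 2D random walkers.
function msdAfter(steps, walkers, sigma) {
  let total = 0;
  for (let w = 0; w < walkers; w++) {
    let x = 0, y = 0;
    for (let s = 0; s < steps; s++) {
      x += sigma * gaussian();
      y += sigma * gaussian();
    }
    total += x * x + y * y;
  }
  return total / walkers;
}

// Theory for 2D: <r^2> = 2 * sigma^2 * steps (sigma^2 per axis per step).
const msd = msdAfter(100, 2000, 1); // expected around 200
console.log(msd);
```

The measured value fluctuates around 200 and the fluctuation shrinks as the ensemble grows, exactly the behavior the MSD plot shows.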
A virtual optical bench — place mirrors, lenses, and prisms to bend light rays. See Snell's law in action, total internal reflection, and chromatic dispersion through prisms.
How it works
Geometric optics simulation with ray tracing. Light rays reflect off mirrors (angle in = angle out), refract through lenses (converging/diverging), and split through prisms (dispersion). Snell's law governs all refractions.
Place flat/curved mirrors, convex/concave lenses, and triangular prisms on the bench. Drag to reposition, rotate with handles. Adjust light source angle and number of rays. Toggle wavelength-dependent dispersion.
Snell's law: n₁sinθ₁ = n₂sinθ₂. Total internal reflection when sinθ > n₂/n₁. Thin lens: 1/f = 1/d_o + 1/d_i. Dispersion: n(λ) varies with wavelength — violet bends more than red.
Place bar magnets and watch field lines form — see attraction, repulsion, and how compass needles align. The invisible force made visible.
How it works
Magnetic dipole fields computed from first principles. Each magnet is a dipole with north/south poles. Field lines traced via RK4 integration from north to south. Compass needles show local field direction and strength.
Click to place magnets, drag to move. Rotate magnets with scroll wheel. Toggle field lines, compass grid, and field strength heatmap. Flip polarity with double-click. Presets: attraction, repulsion, quadrupole.
B = (μ₀/4π)(3(m·r̂)r̂ - m)/r³ for each dipole. Total field: superposition. Field lines: dr/ds = B(r)/|B(r)| integrated with RK4. Torque on compass: τ = m × B.
A pendulum swings over 3 colored magnets — the fractal basin map reveals which magnet captures each starting position.
How it works
Magnetic pendulum exhibits chaotic basins of attraction. Tiny changes in starting position lead to different endpoints.
Drag the pendulum to set starting position. Background map colors each pixel by which magnet captures it.
F_mag = k(r_i - r)/|r_i - r|³ per magnet. Restoring force from gravity: -(g/L)·r. Friction: -b·v. RK4 integration.
Vibrating plate with virtual sand — watch stunning geometric patterns form on nodal lines as you sweep through resonance frequencies.
How it works
Sand on a vibrating plate settles on nodal lines (zero-displacement points).
Sweep frequency slider to find resonance modes. Toggle square and circular plates.
z(x,y) = A·cos(nπx/L)·cos(mπy/L). Sand force: F = -∇|z|.
Balls cascade through pegs, bouncing left or right — watch the normal distribution emerge from pure randomness.
How it works
Each ball makes N independent left/right choices. By CLT, the sum approaches a Gaussian.
Adjust rows (5-20), ball drop rate, probability bias. Overlay normal curve.
P(left) = p. After N levels: Binomial(N,p). CLT in action.
Paint hot and cold regions on a 2D plate — watch heat spread in real-time following the diffusion equation.
How it works
Heat diffuses from hot to cold. Rate depends on thermal conductivity.
Paint hot (red) and cold (blue) with mouse. Draw insulating walls.
∂T/∂t = α∇²T. Explicit Euler: T_new = T + αdt(neighbors - 4T).
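The explicit Euler update can be sketched as follows (shown in 1D for brevity; the sim's plate is the same idea with four neighbors instead of two):

```javascript
// Explicit Euler step for dT/dt = alpha * laplacian(T) on a 1D rod.
// Stable when alpha * dt / dx^2 <= 0.5 (here dx = 1).
function heatStep(T, alpha, dt) {
  const next = T.slice();
  for (let i = 1; i < T.length - 1; i++) {
    next[i] = T[i] + alpha * dt * (T[i - 1] + T[i + 1] - 2 * T[i]);
  }
  return next; // endpoints held fixed (Dirichlet boundaries)
}

let T = new Array(21).fill(0);
T[10] = 100; // one hot cell in the middle
for (let i = 0; i < 500; i++) T = heatStep(T, 1, 0.25);
console.log(T[10], T[5]); // the spike has spread into a smooth bump
```

Because each update is a weighted average of neighbors, temperatures never overshoot, which is the discrete analogue of the maximum principle for the heat equation.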
Race balls down different curves — the shortest time path is NOT the shortest distance!
How it works
Find the curve of fastest descent under gravity. Answer: a cycloid.
Watch 4 preset curves race. Draw your own curve.
Cycloid: x = r(θ - sinθ), y = r(1 - cosθ). Euler-Lagrange solution.
15 pendulums of slightly different lengths create traveling waves, standing waves, and apparent chaos.
How it works
Independent pendulums with carefully chosen lengths create beating patterns.
Start/pause. Adjust count (10-25). Change cycle period.
T_k = T_cycle/(N_base + k). θ_k(t) = A·cos(2πt/T_k).
A moving source emits wavefronts — watch them compress ahead and stretch behind. See Mach cones.
How it works
Moving source: wavefronts bunch up in front and spread behind.
Drag source at different speeds. Toggle subsonic/supersonic.
f_obs = f_src × v_sound/(v_sound ∓ v_src). Mach cone: sin(θ) = 1/M.
Conformal mapping transforms a circle into an airfoil — watch potential flow streamlines.
How it works
Joukowski map w = z + 1/z transforms circles into airfoil shapes.
Drag circle center and radius. Adjust angle of attack.
w = z + 1/z. Kutta-Joukowski: L = ρUΓ (lift per span).
Launch spacecraft, slingshot around planets — design gravitational assist trajectories.
How it works
Gravitational assists: spacecraft gains energy from planetary flyby.
Set launch angle and speed. Place planets. Trail colored by velocity.
a = -GM/r² per body. RK4 integration. ΔV ≈ 2v_planet × sin(θ/2).
AI & Machine Learning
Neural networks, clustering, evolutionary algorithms, swarm intelligence, and reinforcement learning — all visualized step by step.
2D classification with decision boundary heatmap. Add data points, adjust hidden layers and neurons, watch training live.
How it works
A feedforward neural network learns to classify 2D points into two classes. The colored heatmap shows the network's decision boundary — where it thinks each class belongs.
Left click = blue point, right/shift+click = orange point. Choose architecture (hidden layers/neurons). Hit Train to watch the boundary evolve. Presets: Circle, XOR, Spiral, Linear.
Backpropagation with SGD. Tanh hidden activations, sigmoid output. Binary cross-entropy loss. Heatmap renders network output at each pixel.
Single perceptron draws a linear boundary. XOR problem shows why you need hidden layers. Place +/- points to experiment.
How it works
A single perceptron can only learn linear boundaries. The XOR problem is non-linear — it proves you need at least one hidden layer (MLP) to solve it. This is a fundamental insight in ML history.
Click to place +/- points. Toggle Perceptron vs MLP mode. Load XOR preset with Perceptron to see it fail, then switch to MLP to see it succeed. Adjust learning rate and speed.
Perceptron: w·x + b, sign activation. MLP: input→hidden(tanh)→output(sigmoid) with full backpropagation. Heatmap shows decision boundary.
Step through forward and backward passes on a small network. Watch gradients flow backwards and weights update.
How it works
Backpropagation is how neural networks learn. The forward pass computes predictions, the backward pass computes gradients (how much each weight contributed to the error), and weights are updated to reduce loss.
Step Forward → see activations flow left to right. Step Backward → see gradients flow right to left. Update Weights → apply gradient descent. Auto Train cycles through all XOR examples automatically.
Architecture: 2→3→1. Forward: z=Wx+b, a=activation(z). Backward: chain rule to compute ∂L/∂W. Update: W -= lr·∂L/∂W. Loss decreases over training.
Watch a disease spread through a population. Susceptible (blue) → Infected (red) → Recovered (green). Adjust transmission and recovery rates.
How it works
The SIR model divides a population into Susceptible, Infected, and Recovered. Infected individuals spread the disease to nearby susceptible ones. The basic reproduction number R₀ determines if an epidemic grows or dies out.
Adjust infection radius, transmission probability, recovery time, and population size. Click to infect a specific person. Watch the epidemic curve (infections over time) flatten or spike. Vaccination slider immunizes a percentage of the population.
dS/dt = -βSI/N, dI/dt = βSI/N - γI, dR/dt = γI. R₀ = β/γ. When R₀ > 1, epidemic grows. Agent-based simulation shows individual interactions.
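The ODE form of the model integrates in a few lines (a sketch using population fractions so N = 1; forward Euler rather than the sim's agent-based approach):

```javascript
// Forward-Euler SIR. R0 = beta/gamma decides outbreak vs fizzle.
function simulateSIR(beta, gamma, days, dt = 0.1) {
  let S = 0.99, I = 0.01, R = 0; // fractions of the population
  let peak = I;
  for (let t = 0; t < days / dt; t++) {
    const dS = -beta * S * I;
    const dI = beta * S * I - gamma * I;
    const dR = gamma * I;
    S += dS * dt; I += dI * dt; R += dR * dt;
    peak = Math.max(peak, I);
  }
  return { S, I, R, peak };
}

const epidemic = simulateSIR(0.4, 0.1, 200);  // R0 = 4: large outbreak
const fizzle   = simulateSIR(0.05, 0.1, 200); // R0 = 0.5: dies out
console.log(epidemic.peak > fizzle.peak);     // true
```

Note that dS + dI + dR = 0 term by term, so the total population is conserved exactly even by this crude integrator.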
Birds flock using three simple rules: separation, alignment, and cohesion. No leader, no central control — just emergence.
How it works
Craig Reynolds' Boids (1987): each bird follows 3 local rules — avoid crowding (separation), steer toward average heading (alignment), steer toward average position (cohesion). Flocking emerges from these simple rules alone.
Adjust sliders for separation, alignment, cohesion weights. Change flock size and visual range. Click to add an obstacle that birds avoid. Toggle predator mode — click to scare the flock.
Each boid computes steering vectors from nearby neighbors within visual range. Forces are weighted and summed. Speed is clamped. Wrap-around or wall-avoidance at boundaries.
Watch a REINFORCE agent learn to play Flappy Bird from scratch — see the neural network improve episode by episode until it masters the game.
How it works
Policy gradient RL: the agent observes bird position, velocity, and pipe gaps, then outputs a flap probability. After each episode, REINFORCE updates the policy using the reward signal. No Q-tables, just gradient ascent on expected return.
Watch training automatically, or play yourself with spacebar/click. Adjust learning rate, discount factor, and game speed. View reward plot over episodes. Toggle neural network visualization to see weights evolve.
∇J(θ) = E[Σ ∇log π(a|s;θ) · G_t]. Policy: π(a|s) = σ(Wx+b). Discount: G_t = Σγ^k r_{t+k}. Baseline subtraction reduces variance. Simple but powerful — this is how AlphaGo started.
Evolving neural network topology — watch creatures develop brains from scratch. Networks grow connections and neurons through mutation and crossover.
How it works
NEAT (NeuroEvolution of Augmenting Topologies) evolves both weights AND structure of neural networks. Starts with minimal networks, adds nodes/connections via mutation. Speciation protects innovation. Fitness = survival time avoiding obstacles.
Watch generations evolve. Adjust population size, mutation rates, and obstacle difficulty. View the champion's network topology in real-time. Species are color-coded. Fitness graph shows improvement.
Genome: list of node genes + connection genes with innovation numbers. Crossover aligns by innovation #. Distance: δ = c₁E/N + c₂D/N + c₃W̄. Speciation: δ < threshold → same species. Fitness sharing prevents premature convergence.
Play tic-tac-toe against a perfect AI — see the minimax game tree unfold with alpha-beta pruning. Watch how the AI evaluates every possible future.
How it works
Minimax: assume your opponent plays optimally. Build the full game tree, score leaves (+1 win, -1 loss, 0 draw), propagate scores up — maximizing player picks max, minimizing picks min. Alpha-beta pruning skips branches that can't affect the decision.
Play X against the AI. Watch the game tree expand in real-time showing all evaluated positions. Pruned branches shown in gray. Toggle alpha-beta pruning on/off to see the speedup. View node count comparison.
minimax(node) = max(minimax(children)) if maximizing, min otherwise. Alpha-beta: if α ≥ β, prune. Without pruning: O(b^d). With pruning: O(b^(d/2)) — square root speedup. Perfect play: tic-tac-toe always draws.
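The full tic-tac-toe search with alpha-beta pruning fits in a short sketch (illustrative code, not the sim's source; the node counter shows the pruning savings):

```javascript
// Minimax with alpha-beta on tic-tac-toe. Board: 9 cells, 'X'/'O'/null.
const LINES = [
  [0,1,2],[3,4,5],[6,7,8],[0,3,6],[1,4,7],[2,5,8],[0,4,8],[2,4,6]
];

function winner(b) {
  for (const [i, j, k] of LINES)
    if (b[i] && b[i] === b[j] && b[i] === b[k]) return b[i];
  return null;
}

let nodes = 0; // evaluated positions, to compare against the full tree

function minimax(b, player, alpha, beta) {
  nodes++;
  const w = winner(b);
  if (w) return w === 'X' ? 1 : -1; // +1 X wins, -1 O wins
  if (b.every(c => c)) return 0;    // draw
  const maximizing = player === 'X';
  let best = maximizing ? -Infinity : Infinity;
  for (let i = 0; i < 9; i++) {
    if (b[i]) continue;
    b[i] = player;
    const s = minimax(b, player === 'X' ? 'O' : 'X', alpha, beta);
    b[i] = null;
    if (maximizing) { best = Math.max(best, s); alpha = Math.max(alpha, s); }
    else            { best = Math.min(best, s); beta  = Math.min(beta, s); }
    if (alpha >= beta) break; // prune: this branch can't change the choice
  }
  return best;
}

const score = minimax(Array(9).fill(null), 'X', -Infinity, Infinity);
console.log(score, nodes); // score 0: perfect play always draws
```

Without the `alpha >= beta` break the same search visits roughly half a million positions; with it, a few tens of thousands.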
Watch an AI master Snake — using Hamiltonian cycles and A* pathfinding to eat every apple and fill the entire board without dying.
How it works
Three AI strategies: (1) Greedy A* — finds shortest path to food, fast but dies easily. (2) Hamiltonian cycle — follows a space-filling path, guaranteed to win but slow. (3) Hybrid — uses A* shortcuts on the Hamiltonian cycle, fast AND safe.
Choose AI strategy (Greedy, Hamiltonian, Hybrid). Adjust game speed. Play manually with arrow keys. Watch the Hamiltonian cycle overlay. Score and survival stats tracked. Grid size adjustable.
Hamiltonian cycle visits every cell exactly once. A*: f(n) = g(n) + h(n) with Manhattan distance heuristic. Hybrid: take A* shortcut only if it doesn't break the Hamiltonian ordering — guarantees the tail follows.
See how CNNs see — apply convolution kernels to images and watch edges, blurs, and features emerge. The building block of all modern computer vision.
How it works
A convolution kernel (3×3 or 5×5 matrix) slides across an image, computing weighted sums at each position. Different kernels detect different features: edges, corners, textures. This is exactly what CNNs learn automatically during training.
Choose source image or draw your own. Select preset kernels (Sobel edge, Gaussian blur, sharpen, emboss) or edit kernel values manually. Apply multiple kernels in sequence to see feature pipelines. Animate the sliding window.
(f * k)(x,y) = ΣΣ f(x-i, y-j)·k(i,j). Sobel: detects gradients. Gaussian: weighted average (blur). Laplacian: second derivative (edges). Kernel values are exactly what CNN layers learn via backpropagation.
Watch arrows grow as an RL agent learns — Q-Learning vs SARSA comparison.
How it works
Q-Learning: off-policy TD. SARSA: on-policy. Cliff-walking difference.
Design grid: walls, rewards, cliffs. Choose algorithm. ε slider.
Q(s,a) ← Q(s,a) + α[r + γ·max Q(s′,a′) - Q(s,a)].
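The tabular update rule in action, on a hypothetical toy corridor (states 0 to 4, goal at 4) rather than the sim's grid:

```javascript
// Tabular Q-learning on a 1D corridor: actions 0 = left, 1 = right,
// reward +1 for reaching state 4. Epsilon-greedy exploration.
function trainQ(episodes, alpha = 0.5, gamma = 0.9, eps = 0.2) {
  const Q = Array.from({ length: 5 }, () => [0, 0]);
  for (let e = 0; e < episodes; e++) {
    let s = 0;
    while (s !== 4) {
      const a = Math.random() < eps
        ? (Math.random() < 0.5 ? 0 : 1)             // explore
        : (Q[s][1] >= Q[s][0] ? 1 : 0);             // exploit
      const s2 = Math.max(0, Math.min(4, s + (a === 1 ? 1 : -1)));
      const r = s2 === 4 ? 1 : 0;
      // Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
      Q[s][a] += alpha * (r + gamma * Math.max(...Q[s2]) - Q[s][a]);
      s = s2;
    }
  }
  return Q;
}

const Q = trainQ(500);
// The greedy policy should point right in every non-terminal state.
console.log(Q.map(q => (q[1] > q[0] ? 'right' : 'left')).slice(0, 4));
```

The learned values approach γ^(3-s) for the "right" action, the discounted distance-to-goal structure the arrow visualization makes visible.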
A 2D grid of neurons morphs to match data — topology preservation emerges.
How it works
Kohonen SOM: BMU and neighbors move toward input. Grid learns topology.
Choose dataset. Watch grid morph. Toggle color mode.
BMU: c = argmin ||x - w_i||. Update: w_i ← w_i + α·h(i,c)·(x - w_i).
See how Stable Diffusion works — data dissolves into noise, then denoises back.
How it works
Forward: add noise. Reverse: learned denoising. Score function guides recovery.
Watch forward + reverse. Step through timesteps. Score arrows.
x_t = √(ᾱ_t)·x₀ + √(1-ᾱ_t)·ε, where ᾱ_t is the cumulative product of the noise schedule. The reverse process applies the learned denoiser step by step.
Type a sentence — see which words attend to which via arcs and heatmaps.
How it works
Self-attention: Q,K,V vectors. Weight = softmax(QK^T/√d).
Type sentence. View arcs or heatmap. Multi-head tabs.
Attention(Q,K,V) = softmax(QK^T/√d_k)V.
Creatures with bones and muscles evolve to walk — bizarre but effective gaits emerge.
How it works
Morphology + controller co-evolution. Genetic algorithm optimizes both.
Watch generations. View best creature. Fitness graph.
Fitness = distance. Verlet physics. Muscles: F = A·sin(ωt+φ).
Hot particle jumps wildly, cools down and settles — escapes local minima.
How it works
High T: accept worse. Low T: improvements only. Cooling schedule controls.
Choose landscape. Temperature gauge. Compare with greedy.
Accept worse: P = exp(-ΔE/T). Cooling: T_new = α·T.
The algorithm behind AlphaGo — game tree grows via UCB1 selection.
How it works
Build tree incrementally. Random rollouts estimate value. No heuristic needed.
Play Connect-4 against MCTS. Watch tree grow. Adjust thinking time.
UCB1: Q/N + c√(ln N_parent/N). Select→Expand→Rollout→Backprop.
Click to add data — watch prior morph into posterior. Bayes theorem live.
How it works
Prior × Likelihood = Posterior. Observations sharpen beliefs.
Click for coin flips. Watch Beta posterior. Adjust prior (α,β).
P(θ|data) ∝ P(data|θ)·P(θ). Beta(α+h, β+t).
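The conjugate update is just counting, which a tiny sketch makes concrete (illustrative code, not the sim's source):

```javascript
// Beta-Binomial conjugate update: Beta(a, b) prior on a coin's heads
// probability, updated by counting observed flips.
function posterior(priorA, priorB, flips) {
  const heads = flips.filter(f => f === 'H').length;
  const tails = flips.length - heads;
  const a = priorA + heads, b = priorB + tails;
  return { a, b, mean: a / (a + b) };
}

// Uniform prior Beta(1,1), then observe 8 heads and 2 tails.
const p = posterior(1, 1, ['H','H','H','T','H','H','H','T','H','H']);
console.log(p.a, p.b, p.mean); // 9, 3, 0.75
```

The posterior mean 0.75 sits between the prior mean 0.5 and the empirical rate 0.8, and shifts toward the data as more flips arrive.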
Fractal-like tree explosively grows through obstacles — robot motion planning.
How it works
Sample random, find nearest, extend. Tree fills space organically.
Place start, goal, obstacles. Toggle RRT vs RRT*. Adjust step size.
x_new = x_nearest + step toward x_rand. RRT* rewires for optimality.
Soft clustering — Gaussian ellipses breathe and shift as EM iterates.
How it works
E: soft assignments. M: update means and covariances. Converges to MLE.
Click to place points. Choose K. Watch EM animate. Compare K-Means.
E: r_nk = π_k·N(x_n|μ_k,Σ_k) / Σ_j π_j·N(x_n|μ_j,Σ_j). M: update μ, Σ, π from the soft assignments.
King - Man + Woman = Queen. Vector arithmetic captures semantic meaning.
How it works
Words as vectors: similar words nearby. Arithmetic = analogies.
Click words for neighbors. Drag for arithmetic (A-B+C=?).
cos_sim = u·v/(|u||v|). argmax cos(v, v_b-v_a+v_c).
Gaussian Process learns unknown function — acquisition function guides sampling.
How it works
GP fits belief. Expected Improvement finds best next sample.
Choose function. Watch GP update. View mean, CI, and EI.
GP: μ(x), σ²(x). EI = E[max(f(x)-f*, 0)]. RBF kernel.
Robot follows attractive/repulsive fields — beautiful gradient visualization.
How it works
Goal attracts, obstacles repel. Gradient descent navigation.
Drag goal and obstacles. Toggle arrows, heatmap, streamlines.
U_att = ½k|x-x_goal|². U_rep = ½k(1/d-1/d₀)². F = -∇U.
3D stick figure learns to walk via policy gradient — stumbling to stable gait.
How it works
Articulated body with torque joints. RL learns forward locomotion.
Watch training. Speed up. View reward plot. Rotate 3D view.
Policy: π(a|s) = N(μ_θ(s), σ²). REINFORCE gradient update.
Record stick figure motion — watch agent learn to imitate it.
How it works
Behavioral cloning + DAgger. Supervised learning on expert demos.
Record demo. Train imitator. Side-by-side expert vs learner.
BC: min Σ||π_θ(s)-a*||². DAgger: aggregate on-policy states.
Control & Estimation
State estimation, sensor fusion, and control systems with real-time noise and uncertainty.
Track a noisy trajectory with a Kalman Filter. See truth (blue), measurement (red), and estimate (green) with uncertainty ellipses.
How it works
Sensors are noisy. The Kalman Filter optimally combines a motion model (prediction) with noisy measurements (update) to produce a better estimate than either alone.
Adjust process noise (how unpredictable the motion is), measurement noise (how noisy the sensor is), and update rate. The green trail shows the filter's cleaned-up estimate.
Predict: x̂ = F·x, P = F·P·F' + Q. Update: K = P·H'·(H·P·H' + R)⁻¹, x̂ += K·(z - H·x̂). The Kalman gain K balances trust between model and measurement.
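In one dimension with F = H = 1 the predict/update cycle collapses to a handful of scalar lines, which makes the gain's role easy to see (a sketch, not the sim's multi-dimensional filter):

```javascript
// 1D Kalman filter, constant-position model: the gain K blends the
// prediction and the measurement by their relative uncertainties.
function kalman1D(zs, Q, R) {
  let x = zs[0], P = 1;
  const out = [x];
  for (let i = 1; i < zs.length; i++) {
    P += Q;                  // predict: state unchanged, uncertainty grows
    const K = P / (P + R);   // Kalman gain: high P -> trust measurement
    x += K * (zs[i] - x);    // update toward the measurement
    P *= 1 - K;              // uncertainty shrinks after the update
    out.push(x);
  }
  return out;
}

// Noisy measurements of a constant true value 5.
const zs = [5.3, 4.8, 5.4, 4.6, 5.1, 5.2, 4.9, 5.0];
const est = kalman1D(zs, 0.001, 0.5);
console.log(est[est.length - 1].toFixed(2)); // close to 5
```

With a small process noise Q the gain decays over time and the filter behaves like a running average; a larger Q keeps it responsive to genuine motion.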
Complementary filter fusing gyroscope and accelerometer data. Adjust drift, noise, and blending factor to see fusion quality.
How it works
Gyroscopes drift over time but track fast changes. Accelerometers are noisy but don't drift. A complementary filter blends both: α×gyro + (1-α)×accel gives you the best of both worlds.
Move mouse to tilt the virtual IMU. Adjust gyro drift rate, accelerometer noise, and blending factor (α). Watch how each sensor alone fails but fusion succeeds. RMS error shows quality.
angle = α·(angle + gyro·dt) + (1-α)·accel_angle. High α trusts gyro more (good for fast motion), low α trusts accelerometer more (good for steady state).
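The whole filter is one line per sample. A sketch with a made-up drift bias shows why the blend beats the gyro alone:

```javascript
// Complementary filter: the gyro term tracks fast changes, the
// accelerometer term pins the low-frequency estimate.
function fuse(angle, gyroRate, accelAngle, alpha, dt) {
  return alpha * (angle + gyroRate * dt) + (1 - alpha) * accelAngle;
}

// A still IMU tilted at 10 degrees: the gyro reads a constant drift
// bias (hypothetical 0.5 deg/s), the accelerometer reads the true 10.
let angle = 0;
for (let i = 0; i < 1000; i++) {
  angle = fuse(angle, 0.5, 10, 0.98, 0.01);
}
console.log(angle.toFixed(1)); // settles near 10 despite the drift
```

Integrating the gyro alone would have drifted to 5 degrees of error over the same 10 seconds; the small accelerometer weight is enough to cancel the bias at steady state.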
Robotics & Planning
Inverse kinematics, circuit analysis, and robotic planning algorithms.
Compare CCD, Jacobian Transpose, and FABRIK inverse kinematics solvers. Drag the target to move a multi-joint arm.
How it works
Inverse Kinematics (IK): given a target position, compute joint angles that place the end-effector there. Three different algorithms solve this differently — each with tradeoffs.
Drag the target point. Watch three arms solve simultaneously: CCD (iterative joint rotation), Jacobian (gradient-based), FABRIK (forward-backward reaching). Compare convergence speed and smoothness.
CCD: rotate each joint to minimize tip-to-target distance. Jacobian transpose: Δθ = α·Jᵀ·Δx, a gradient step toward the target. FABRIK: alternately reach forward then pull back from base.
Build circuits with batteries, resistors, LEDs, and switches. Real-time Kirchhoff solver shows current flow and voltages.
How it works
Place components on a grid to build circuits. The solver applies Kirchhoff's laws to compute voltages and currents in real-time. LEDs glow with current, and pop if overloaded!
Select a component type (battery, resistor, LED, switch, wire) and click grid cells to place. Click switches to toggle. Hover components to see voltage/current info. Try presets: Series, Parallel, Voltage Divider.
Kirchhoff's Current Law (sum of currents = 0) + Kirchhoff's Voltage Law (sum of voltages = 0) + Ohm's Law (V = IR). BFS traces current paths.
Signal Processing
Frequency analysis, Fourier transforms, and signal decomposition visualized.
Compose signals from sine waves, apply filters (low-pass, high-pass, band-pass), and see the FFT decomposition in real-time.
How it works
Any signal can be decomposed into sine waves of different frequencies. The FFT (Fast Fourier Transform) reveals which frequencies are present and how strong they are.
Add sine waves with custom frequency and amplitude. Draw freehand signals. Apply filters to remove frequencies. Toggle playback to hear the signal through Web Audio API.
Cooley-Tukey radix-2 FFT: O(n log n) decomposition of the time-domain signal into the frequency domain. Inverse FFT reconstructs the filtered signal.
Draw any shape freehand and watch epicycles decompose it into rotating circles. Adjust the number of circles and speed.
How it works
Any closed curve can be approximated by a sum of rotating circles (epicycles). More circles = more detail. This is the Fourier Series in action — decomposing a shape into frequency components.
Draw any shape on the canvas. After releasing, epicycles animate to trace your drawing. Slider adjusts how many circles are used (fewer = rougher approximation). Presets: Circle, Square, Star, Heart.
Discrete Fourier Transform treats (x,y) path as complex numbers. Each DFT coefficient becomes one rotating circle. Coefficients sorted by magnitude — largest circles contribute most.
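The decomposition boils down to a plain DFT over the path points; a sketch (illustrative code, not the sim's source) that verifies a circle needs exactly one epicycle:

```javascript
// DFT of a closed path treated as complex numbers z = x + iy. Each
// coefficient is one epicycle: magnitude = circle radius, index = speed.
function dft(xs, ys) {
  const N = xs.length, out = [];
  for (let k = 0; k < N; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < N; n++) {
      const phi = (2 * Math.PI * k * n) / N; // z_n * e^(-i*phi)
      re += xs[n] * Math.cos(phi) + ys[n] * Math.sin(phi);
      im += ys[n] * Math.cos(phi) - xs[n] * Math.sin(phi);
    }
    out.push({ freq: k, radius: Math.hypot(re, im) / N });
  }
  return out.sort((a, b) => b.radius - a.radius); // biggest circles first
}

// Sample a unit circle: all energy lands in the freq = 1 coefficient.
const N = 64, xs = [], ys = [];
for (let n = 0; n < N; n++) {
  xs.push(Math.cos((2 * Math.PI * n) / N));
  ys.push(Math.sin((2 * Math.PI * n) / N));
}
const coeffs = dft(xs, ys);
console.log(coeffs[0].freq, coeffs[0].radius.toFixed(2)); // 1 "1.00"
```

For a hand-drawn shape the sorted list tails off gradually, and truncating it is exactly what the circle-count slider does.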
Math & Visualization
Fractals, cellular automata, strange attractors, geometric constructions, and procedural generation.
Click to zoom into the Mandelbrot set with smooth coloring. Hover to see the corresponding Julia set. Adjust max iterations.
How it works
The Mandelbrot set: iterate z = z² + c. If the sequence stays bounded, c is in the set (black). The boundary has infinite detail — you can zoom forever and find new patterns.
Click to zoom in 3x. Shift/right-click to zoom out. Hover to see the Julia set for that point. Toggle Julia mode. Adjust iterations (more = finer detail at deep zooms). Choose color scheme: Fire, Ocean, Neon, Grayscale.
z_{n+1} = z_n² + c. Color from smooth iteration count: n + 1 - log(log|z|)/log(2). Two-pass rendering: fast preview then full resolution.
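The per-pixel escape-time loop with smooth coloring is short (a sketch; each pixel maps to one complex c = cr + i·ci):

```javascript
// Escape-time iteration for z_{n+1} = z_n^2 + c with smooth coloring.
function mandelbrot(cr, ci, maxIter) {
  let zr = 0, zi = 0;
  for (let n = 0; n < maxIter; n++) {
    const zr2 = zr * zr - zi * zi + cr; // real part of z^2 + c
    zi = 2 * zr * zi + ci;              // imaginary part
    zr = zr2;
    if (zr * zr + zi * zi > 4) {        // escaped: |z| > 2
      const mag = Math.sqrt(zr * zr + zi * zi);
      // smooth iteration count removes the color banding
      return n + 1 - Math.log(Math.log(mag)) / Math.log(2);
    }
  }
  return maxIter; // never escaped: point is in the set
}

console.log(mandelbrot(0, 0, 100));     // 100: the origin is in the set
console.log(mandelbrot(2, 2, 100) < 5); // true: escapes immediately
```

The fractional return value is what gets fed into the color palette, so neighboring pixels shade continuously instead of in discrete bands.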
Conway's Game of Life with preset patterns (glider, pulsar, LWSS). Click to toggle cells, adjust simulation speed.
How it works
Four simple rules create infinite complexity. A cell is born with exactly 3 neighbors, survives with 2-3, and dies otherwise. From these rules emerge gliders, oscillators, and even Turing-complete computers.
Click/drag to paint cells. Play/Pause and Step buttons. Speed slider (1-30 gen/sec). Random fill. Presets: Glider (moves), Pulsar (oscillates), LWSS (spaceship), Gosper Gun (makes gliders), R-pentomino (chaos).
B3/S23 rule on a toroidal grid. Each cell counts its 8 neighbors. Simple counting produces emergence — one of the deepest ideas in computation.
Generate fractal trees and plants from simple rewriting rules. Adjust angle, iterations, and axiom to create botanical structures.
How it works
Lindenmayer Systems (L-Systems) model plant growth through string rewriting. Start with a simple axiom, apply rules repeatedly, then interpret the result as turtle graphics commands.
Choose presets: Binary Tree, Fern, Koch Snowflake, Dragon Curve, Sierpinski. Adjust branching angle, number of iterations (detail level), and segment length. Click Grow to re-generate.
Rules like F→F[+F]F[-F]F replace characters iteratively. F=draw forward, +=turn right, -=turn left, [=save state, ]=restore state. Simple rules create stunning botanical complexity.
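The rewriting itself is a one-liner per iteration; a sketch using the plant rule from above:

```javascript
// Parallel string rewriting: every character is replaced by its
// production (or kept as-is) once per iteration.
function expand(axiom, rules, iterations) {
  let s = axiom;
  for (let i = 0; i < iterations; i++) {
    s = [...s].map(ch => rules[ch] ?? ch).join('');
  }
  return s;
}

const rules = { F: 'F[+F]F[-F]F' };
console.log(expand('F', rules, 1));        // "F[+F]F[-F]F"
console.log(expand('F', rules, 2).length); // 61: 25 F's + 36 turn/bracket chars
```

The string grows roughly fivefold per iteration here, which is why a small iteration slider covers everything from a twig to a dense fern.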
Gray-Scott model creates Turing patterns — spots, stripes, and labyrinths emerge from two diffusing chemicals.
How it works
Alan Turing proposed (1952) that biological patterns (zebra stripes, leopard spots) emerge from two chemicals that diffuse and react. The Gray-Scott model simulates this — simple PDEs create stunning organic patterns.
Click/drag to seed chemical B. Adjust feed rate (f) and kill rate (k) to get different pattern types: spots, stripes, waves, mitosis. Presets for common parameter combinations. Clear to restart.
∂A/∂t = D_A∇²A - AB² + f(1-A). ∂B/∂t = D_B∇²B + AB² - (k+f)B. Laplacian computed via convolution. Diffusion rates D_A > D_B create instability → patterns.
Clifford, De Jong, and other strange attractors rendered as millions of points. Tiny parameter changes create wildly different artwork.
How it works
Strange attractors are iterated function systems — apply a simple formula millions of times, plot each point, and beautiful structure emerges from chaos. Each attractor type has its own formula with 2-4 parameters.
Choose attractor type: Clifford, De Jong, Svensson, Bedhead. Adjust parameters with sliders — tiny changes create completely different patterns. Randomize for surprises. Color based on density or iteration count.
Clifford: x' = sin(a·y) + c·cos(a·x), y' = sin(b·x) + d·cos(b·y). Iterate millions of times, accumulate point density, render as a heatmap. Pure mathematical beauty.
Draw walls, set start/end — watch A*, Dijkstra, BFS, and DFS explore the grid. Generate mazes and compare algorithm performance.
How it works
Pathfinding algorithms search for the shortest route between two points. BFS explores uniformly, Dijkstra by cost, A* adds a heuristic to focus search. Watch how each algorithm explores differently — some are smart, some are brute-force.
Click/drag to draw walls. Drag start (green) and end (red) nodes. Choose algorithm: A*, Dijkstra, BFS, DFS, Greedy BFS. Generate mazes: Recursive Division, Prim's, Kruskal's. Adjust speed, toggle diagonal movement.
A*: f(n) = g(n) + h(n) where g = cost so far, h = heuristic (Manhattan/Euclidean distance). Dijkstra: same but h=0. BFS: unweighted shortest path. DFS: goes deep first, not optimal.
Drop a ball on a 2D loss landscape — watch SGD, Momentum, Adam, and RMSProp race to find the minimum. Compare convergence behavior.
How it works
Neural networks learn by minimizing a loss function. Gradient descent walks downhill, but the landscape has saddle points, local minima, and ravines. Different optimizers handle these differently.
Click to place starting point(s). Choose optimizers to compare (run simultaneously). Adjust learning rate. Choose landscape: Beale, Rosenbrock, Himmelblau, Rastrigin, custom. Watch trajectories and loss curves.
SGD: θ -= lr·∇L. Momentum: v = β·v + ∇L, θ -= lr·v. Adam: combines momentum + RMSProp with bias correction. Each handles curvature differently — Adam adapts per-parameter.
Diffusion-Limited Aggregation — random walkers stick on contact, building snowflake and coral-like crystal structures.
How it works
Particles random-walk until they touch the growing crystal, then stick permanently. This simple rule produces branching, fractal structures that look like snowflakes, lightning, or mineral dendrites.
Adjust stickiness (probability of attaching on contact), particle spawn rate, and max particles. Choose seed shape: Point (center), Line (bottom), Circle. Watch the crystal grow in real-time.
Random walk: each step moves ±1 in x or y. On contact with crystal, particle sticks with probability p. Fractal dimension of DLA clusters ≈ 1.71. Discovered by Witten & Sander (1981).
Nodes repel each other, edges act as springs. Watch a network layout organize itself through physics simulation.
How it works
Force-directed layout uses physics to arrange graphs: nodes repel like charged particles (Coulomb's law), edges attract like springs (Hooke's law). The system settles into a readable layout naturally.
Click to add nodes, drag between nodes to create edges. Drag nodes to reposition. Adjust repulsion strength, spring length, and damping. Presets: Tree, Mesh, Ring, Random. Delete nodes with right-click.
Repulsion: F = k/d² (all pairs). Attraction: F = -k·(d - rest_length) (connected pairs). Velocity Verlet integration with damping for convergence.
Watch sorting algorithms race — Bubble, Selection, Insertion, Merge, Quick, and Heap sort compared visually.
How it works
Different sorting algorithms have different strategies and performance. O(n²) algorithms (Bubble, Selection) are slow but simple. O(n log n) algorithms (Merge, Quick, Heap) are fast but complex. See the difference visually.
Choose algorithm, array size, and initial order (random, nearly sorted, reversed). Hit Sort to watch bars rearrange. Comparisons and swaps are counted. Speed slider controls animation pace.
Bubble: O(n²) worst case. Merge: O(n log n) guaranteed. Quick: O(n log n) average, O(n²) worst. Heap: O(n log n) guaranteed. Visual bars make the differences obvious.
The golden angle creates sunflower spirals — nature's most efficient packing pattern. Adjust the angle to see why 137.5° is special.
How it works
Place each seed at angle n × θ, distance √n from center. At θ = 137.507...° (golden angle = 360°/φ²), seeds pack perfectly with no gaps. Any other angle creates visible spokes — even 0.01° off ruins the pattern.
Adjust angle (0-360°, default golden angle). Adjust seed count (50-2000). Toggle color modes: by index, by spiral arm, rainbow. Animate the angle to see patterns form and break. Size scaling options.
Position of seed n: r = c·√n, θ = n·137.507...°. The golden angle = 360°·(1 - 1/φ) where φ = (1+√5)/2. Fibonacci numbers appear in the spiral arm counts (8, 13, 21, 34...).
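The seed placement is two lines (Vogel's model, with an illustrative scale factor c):

```javascript
// Vogel's phyllotaxis: seed n at radius c*sqrt(n), angle n * golden angle.
const GOLDEN_ANGLE = Math.PI * (3 - Math.sqrt(5)); // radians ≈ 137.508°

function seedPosition(n, c = 4) {
  const r = c * Math.sqrt(n);
  const theta = n * GOLDEN_ANGLE;
  return { x: r * Math.cos(theta), y: r * Math.sin(theta) };
}

console.log((GOLDEN_ANGLE * 180 / Math.PI).toFixed(3)); // "137.508"
console.log(Math.hypot(seedPosition(4).x, seedPosition(4).y)); // 8, up to rounding
```

Note π(3-√5) is the same angle as 360°/φ²; the √n radius keeps the seed density uniform, so any rational approximation of the angle immediately shows up as spokes.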
Connect numbered pins with strings following a rule — cardioids, nephroids, and stunning geometric envelopes emerge from straight lines.
How it works
Place pins around a circle. Connect pin n to pin (n×k) mod total. Different multipliers k create different curves: k=2 → cardioid, k=3 → nephroid, k=4 → 3-cusped epicycloid. Straight lines create curved envelopes.
Adjust number of pins (20-360), multiplier k (2-50), and line opacity. Animate k to morph between patterns. Color modes: solid, rainbow by connection, gradient by angle. Pin shape: circle, polygon, custom.
Connect pin i to pin (i×k) mod N. The envelope of these chords forms a curve called an epicycloid. When k and N share factors, fewer distinct strings appear. Prime N gives the most complex patterns.