The Right Conversation at the Right Time
Understanding sometimes happens through unexpected conversations rather than structured learning. One night, a friend explained how matrices work: how a grid of numbers can represent 3D space, rotate objects, and transform them. I had encountered matrix math in documentation without fully grasping it, but after that conversation the concept finally made sense. The success of this mini project is the result of that solidified understanding.
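As a concrete illustration of that idea (a minimal Three.js sketch, not something from that conversation): a rotation is nothing more than a grid of numbers applied to a point.
import * as THREE from 'three'

// A rotation is just a grid of numbers: makeRotationY builds a 4×4
// matrix that spins points around the Y axis.
const rotate90 = new THREE.Matrix4().makeRotationY(Math.PI / 2)

const point = new THREE.Vector3(1, 0, 0)
point.applyMatrix4(rotate90)
// point is now approximately (0, 0, -1): the X axis rotated onto -Z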
A few days later, I came across a photo of an LED marquee animation on a spherical display. Instead of scrolling text, someone had programmed it to show a picture of a squirrel using glowing dots.
I used Cursor to scaffold a Vite project and started working with Three.js. This post recounts the implementation and the steps it took to get a working demo.
The Plan
The goal was to recreate the LED marquee effect in 3D, wrapped around a sphere with mouse interaction for rotation. The requirements were:
- 60fps performance
- Simple React component interface
- Interactive controls (dragging, zooming)
- LED appearance
Picking the Stack
React Three Fiber provided a reasonable middle ground between React familiarity and 3D graphics learning. It allows writing Three.js scenes using React components.
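For a sense of that declarative style, a minimal scene might look like this (an illustrative sketch, not code from this project):
import { Canvas } from '@react-three/fiber'

// A minimal scene: one lit box, expressed as JSX instead of
// imperative Three.js scene-graph calls.
export function MinimalScene() {
  return (
    <Canvas>
      <ambientLight intensity={0.5} />
      <mesh>
        <boxGeometry args={[1, 1, 1]} />
        <meshStandardMaterial color="hotpink" />
      </mesh>
    </Canvas>
  )
}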
The stack:
- React Three Fiber
- @react-three/drei
- @react-three/postprocessing
- Vite
- TypeScript
The Hard Part: Text on a Sphere
The core challenge was mapping flat text onto a sphere with seamless scrolling. The approach treats the sphere as covered in a grid of LEDs, similar to the physical orb from the photo. Each LED has a binary state: on or off.
The process:
- Determine LED positions on the sphere
- Calculate which LEDs should be “on” for text display
- Implement scrolling
- Render the result properly
The LED Grid
The sphere divides based on latitude and longitude:
- Latitude spans pole to pole (π radians)
- Longitude wraps around the equator (2π radians)
LED density is determined by ledSpacing and columnSpacing parameters:
const rows = Math.ceil(Math.PI / ledSpacing)
const cols = Math.ceil((2 * Math.PI) / columnSpacing)
This creates a virtual grid that maps onto the sphere’s surface. For example, ledSpacing = 0.05 radians yields rows = ceil(π / 0.05) = 63.
Turning Text Into LED Data
The process for converting text to LED data involves rendering to an off-screen canvas and sampling pixel data to build the LED matrix.
Step 1: Create off-screen canvas
const canvas = document.createElement('canvas')
const ctx = canvas.getContext('2d')
if (!ctx) return
const canvasWidth = 2048
const canvasHeight = 256
canvas.width = canvasWidth
canvas.height = canvasHeight
The canvas is not added to the DOM. It exists only to render text and read pixels. The dimensions are wide (2048px) by short (256px) to support multiple text repetitions.
Step 2: Configure text styling
ctx.fillStyle = '#000000'
ctx.fillRect(0, 0, canvasWidth, canvasHeight)
ctx.fillStyle = '#ffffff'
ctx.font = 'bold 80px monospace'
ctx.textAlign = 'center'
ctx.textBaseline = 'middle'
Black background with white text. White pixels will be detected as “on” LEDs.
Step 3: Draw text multiple times for seamless wrapping
const textWidth = ctx.measureText(text).width // e.g., 500px
const spacing = textWidth + 200 // e.g., 700px between instances
const repetitions = Math.ceil(canvasWidth / spacing) + 2
for (let i = -1; i < repetitions; i++) {
const x = canvasWidth / 2 + i * spacing
ctx.fillText(text, x, canvasHeight / 2)
}
This creates a repeating pattern: ...HELLO HELLO HELLO...
If text “HELLO” is 500px wide with 700px spacing on a 2048px canvas:
- Position -1: x = 1024 - 700 = 324px
- Position 0: x = 1024px (center)
- Position 1: x = 1024 + 700 = 1724px
Multiple copies enable seamless horizontal scrolling. When the shader samples with a scroll offset using modulo arithmetic, there is always text available to display.
Step 4: Extract pixel data
const imageData = ctx.getImageData(0, 0, canvasWidth, canvasHeight)
const data = imageData.data // RGBA array: 2048 × 256 × 4 = 2,097,152 bytes
Each pixel uses 4 bytes (R, G, B, A).
Step 5: Map pixels to LED matrix
const messageRowStart = Math.floor(rows * 0.4) // 40% from top
const messageRowEnd = Math.floor(rows * 0.6) // 60% from top
const messageRows = messageRowEnd - messageRowStart
const singleTextWidth = spacing
for (let canvasY = 0; canvasY < canvasHeight; canvasY++) {
for (let canvasX = 0; canvasX < canvasWidth; canvasX++) {
const pixelIndex = (canvasY * canvasWidth + canvasX) * 4
const r = data[pixelIndex]
if (r > 128) { // White or near-white pixel
// Map to single text instance using modulo to prevent duplication
const wrappedX = canvasX % singleTextWidth
const normalizedX = wrappedX / singleTextWidth // 0 to 1
const col = Math.floor(normalizedX * cols)
// Map Y to matrix rows (flipped: canvas top = higher row numbers)
const normalizedY = 1.0 - (canvasY / canvasHeight)
const messageRow = Math.floor(normalizedY * messageRows)
const row = messageRowStart + messageRow
if (row >= 0 && row < rows && col >= 0 && col < cols) {
matrix[row * cols + col] = 1 // Mark LED as "on"
}
}
}
}
Text appears only in the middle band (40-60% from top), creating the belt effect. The modulo operation prevents overlapping characters from multiple text repetitions.
Step 6: Convert to texture
const matrixToTexture = (matrix: Uint8Array, rows: number, cols: number): THREE.DataTexture => {
  const textureData = new Uint8Array(rows * cols)
  for (let i = 0; i < matrix.length; i++) {
    textureData[i] = matrix[i] * 255 // 0 or 255
  }
  const texture = new THREE.DataTexture(textureData, cols, rows, THREE.RedFormat)
  texture.needsUpdate = true // required so Three.js uploads the data to the GPU
  return texture
}
The matrix transforms into a Three.js texture where each pixel represents LED state.
Shaders: Where the Magic Happens
Custom shaders handle the LED rendering. These are small programs that run on the GPU: the vertex shader once per vertex, the fragment shader once per pixel.
The Vertex Shader
The vertex shader passes position data through:
varying vec3 vWorldPosition;
varying vec2 vUv;
void main() {
vUv = uv;
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
vWorldPosition = worldPosition.xyz;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
The Fragment Shader
This determines pixel color through several steps.
Step 1: Convert 3D position to spherical coordinates
vec3 pos = normalize(vWorldPosition);
float latitude = asin(pos.y); // -π/2 to π/2 (south pole to north pole)
float longitude = atan(pos.z, pos.x); // -π to π
The fragment shader receives a 3D world position for each pixel on the sphere’s surface. Converting to spherical coordinates (latitude and longitude) allows mapping to the 2D LED matrix.
Step 2: Map spherical coordinates to LED grid
// Normalize latitude: -π/2 to π/2 → 0 to 1
float normalizedLat = (latitude + 1.5708) / 3.14159;
float row = floor(normalizedLat * uMatrixRows);
row = clamp(row, 0.0, uMatrixRows - 1.0);
// Normalize longitude: -π to π → 0 to 1, then flip direction
float normalizedLon = 1.0 - ((longitude + 3.14159) / (2.0 * 3.14159));
float scrolledLon = mod(normalizedLon + uScrollOffset, 1.0);
float col = floor(scrolledLon * uMatrixCols);
col = clamp(col, 0.0, uMatrixCols - 1.0);
The 1.0 - normalizedLon flip corrects text direction. Without it, text appears backwards. The uScrollOffset is added before the modulo operation to create the scrolling effect.
Step 3: Sample the LED matrix texture
vec2 matrixUv = vec2(col / uMatrixCols, row / uMatrixRows);
vec4 matrixValue = texture2D(uLedMatrix, matrixUv);
float ledState = matrixValue.r; // 0.0 (off) or 1.0 (on)
The calculated row and column index into the texture that stores the LED matrix. The red channel contains the LED state.
Step 4: Create circular LED bulbs
// Quantize UVs into LED grid
float columnGridSize = 1.0 / uColumnSpacing;
float rowGridSize = 1.0 / uLedSpacing;
vec2 gridUv = vec2(
floor(vUv.x * columnGridSize) / columnGridSize,
floor(vUv.y * rowGridSize) / rowGridSize
);
// Find center of current grid cell
vec2 gridCenter = vec2(
gridUv.x + uColumnSpacing * 0.5,
gridUv.y + uLedSpacing * 0.5
);
// Calculate distance from center
vec2 distFromCenter = abs(vUv - gridCenter);
float maxDist = min(uColumnSpacing, uLedSpacing) * 0.2;
float dist = length(distFromCenter);
// Create circular shape with smooth edges
float ledShape = smoothstep(maxDist, maxDist * 0.7, dist);
The UV coordinates are quantized into a grid. Each grid cell represents one LED. Distance from the cell center determines whether the pixel is part of the LED bulb or the transparent space between LEDs. smoothstep creates anti-aliased circular edges.
Step 5: Apply color and emissive glow
// Mix between dim and bright based on LED state
vec3 ledColor = mix(uDimColor, uBrightColor, ledState);
// Apply LED shape - creates circular bulbs with transparent space
vec3 finalColor = mix(vec3(0.0), ledColor, ledShape);
// Add emissive glow for bright LEDs
float emissive = ledState * ledShape * 1.5;
// Use ledShape as alpha for transparency
float alpha = ledShape;
gl_FragColor = vec4(finalColor + vec3(emissive * 0.2), alpha);
The LED color is determined by its state: dim color for off, bright color for on. The ledShape value controls both the visible LED area and alpha transparency, making the space between LEDs transparent. Emissive values are boosted for bright LEDs, which the bloom post-processing effect amplifies into a glow.
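For context, the uniforms referenced throughout these snippets have to be declared on the material. A sketch of how that wiring might look (an assumed setup; the actual component’s construction may differ):
import * as THREE from 'three'

// Assumed wiring for the uniforms used above; the real component may differ.
// vertexShader / fragmentShader are the GLSL strings shown earlier, and
// ledMatrixTexture comes from matrixToTexture().
const material = new THREE.ShaderMaterial({
  vertexShader,
  fragmentShader,
  transparent: true, // ledShape drives alpha, so blending must be enabled
  uniforms: {
    uLedMatrix: { value: ledMatrixTexture },
    uMatrixRows: { value: rows },
    uMatrixCols: { value: cols },
    uLedSpacing: { value: ledSpacing },
    uColumnSpacing: { value: columnSpacing },
    uScrollOffset: { value: 0 },
    uDimColor: { value: new THREE.Color('#1a261a') },
    uBrightColor: { value: new THREE.Color('#00ff4d') },
  },
})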
Making It Scroll
Scrolling happens by incrementing a scrollOffset value each frame:
useFrame((state, delta) => {
scrollOffsetRef.current += delta * speed
if (scrollOffsetRef.current >= 1.0) {
scrollOffsetRef.current -= 1.0 // Wrap around
}
if (material && 'uniforms' in material) {
material.uniforms.uScrollOffset.value = scrollOffsetRef.current
}
})
The shader uses this offset to shift the sampled column:
float scrolledLon = mod(normalizedLon + uScrollOffset, 1.0);
The text wraps seamlessly because multiple copies were rendered on the canvas with proper spacing. For example, with normalizedLon = 0.9 and uScrollOffset = 0.3, scrolledLon = mod(1.2, 1.0) = 0.2.
Why GPU Shaders Enable 60fps Performance
The orb maintains smooth 60fps performance with thousands of LEDs because the shader runs on the GPU rather than in JavaScript.
The Problem: JavaScript is Single-Threaded
If each LED were computed in JavaScript:
- Loop through thousands of LEDs each frame
- Each LED needs coordinate transforms, texture lookups, and color calculations
- JavaScript runs on the CPU, one instruction at a time
- This causes frame drops and stuttering
Example of the slow approach:
// Sequential, slow
for (let i = 0; i < 10000; i++) {
// Calculate LED position
// Transform coordinates
// Sample texture
// Mix colors
// Apply glow
}
The Solution: Parallel GPU Computation
The fragment shader runs on the GPU, which processes many pixels simultaneously.
1. Parallel Execution
The GPU has hundreds or thousands of cores. Each pixel on screen is processed at the same time. For a 1920×1080 display, approximately 2 million pixels are computed in parallel. Even with thousands of LEDs, each pixel’s computational work is minimal.
2. Per-Pixel Operations
For each pixel, the shader:
- Converts 3D position to spherical coordinates
- Maps to matrix row and column
- Samples the LED matrix texture
- Calculates LED shape (circular dot)
- Mixes colors based on LED state
- Adds emissive glow
All of this happens in parallel across the GPU cores.
3. Minimal JavaScript Updates
JavaScript only updates one value per frame:
scrollOffsetRef.current += delta * speed
material.uniforms.uScrollOffset.value = scrollOffsetRef.current
GPU Architecture Advantages
Hardware-Accelerated Operations
- Thousands of cores running the same shader program
- Optimized for math-heavy, parallel workloads
- Texture sampling uses dedicated hardware
- No JavaScript overhead (no garbage collection, no event loop)
The Actual Hardware
When referring to “GPU,” this means the physical GPU hardware of the computer or device running the browser:
- Desktop: Dedicated GPU (NVIDIA, AMD, Intel) or integrated graphics
- Laptop: Dedicated or integrated GPU
- Mobile: GPU on the SoC (Adreno, Mali, Apple GPU)
The browser uses WebGL to communicate with the GPU. WebGL translates shader code into GPU instructions, and the browser’s graphics driver sends these to the GPU hardware. The shader code executes on the actual GPU cores, not the CPU.
What Happens Without a GPU
If a device lacks a GPU or has GPU support disabled, browsers historically fell back to software rendering where the CPU emulates GPU operations. Chrome used SwiftShader as a software renderer for WebGL fallback, though as of Chrome 137 in May 2025, automatic software fallback via SwiftShader has been deprecated. Chrome is adopting Windows Advanced Rasterization Platform (WARP) as the replacement software fallback on Windows.
Software rendering has significant limitations:
- Substantially slower performance (CPU sequential processing vs GPU parallel processing)
- May fail to maintain 60fps with complex 3D graphics
- Non-Windows platforms without GPU access may not run WebGL applications at all in newer Chrome versions
Devices without GPU support will either run the application slowly via software rendering or fail to load WebGL content entirely, depending on the browser and platform.
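One way to guard against this (a common pattern, not code from this project) is to feature-detect WebGL before mounting the 3D scene:
// Hypothetical helper, not from the project: detect WebGL support
// so non-GPU devices can get a static fallback UI instead of a crash.
function hasWebGL(): boolean {
  try {
    const canvas = document.createElement('canvas')
    return !!(canvas.getContext('webgl2') || canvas.getContext('webgl'))
  } catch {
    return false
  }
}
An app could run this check once at startup and render a fallback when it returns false.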
Data Flow
Setup (once): JavaScript creates the LED matrix texture from the canvas pixel data.
Per frame: JavaScript updates one uniform (uScrollOffset).
GPU: Shader processes every pixel in parallel, using the texture and offset.
This is why 60fps is maintained with thousands of LEDs. The GPU handles the heavy computation in parallel while JavaScript only updates a single scrolling value each frame.
Performance Comparison
JavaScript approach (CPU):
Process LED 1 → Process LED 2 → Process LED 3 → ... → Process LED 10000
Time: Sequential, blocking
GPU shader approach:
Process all 10000+ LEDs simultaneously
Time: Parallel, fast
The difference between sequential CPU processing and parallel GPU processing is what makes real-time 3D graphics possible in the browser.
Putting It All Together
The LedMarqueeOrbContainer component handles Canvas setup, lighting, controls, and post-processing:
<Canvas camera={{ position: [0, 0, 5], fov: 50 }}>
<ambientLight intensity={0.3} />
<pointLight position={[10, 10, 10]} intensity={0.5} />
<Environment preset="city" />
<LedMarqueeOrb
word={word}
speed={speed}
dimColor={dimColor}
brightColor={brightColor}
/>
<OrbitControls enablePan={false} minDistance={3} maxDistance={8} />
<EffectComposer>
<Bloom intensity={1.5} luminanceThreshold={0.9} />
</EffectComposer>
</Canvas>
The Bloom effect creates the LED glow.
Things That Went Wrong
The Backwards Text Problem
Initially, text appeared backwards due to flipped longitude coordinates. The fix was changing normalizedLon to 1.0 - normalizedLon. Two hours to identify, one line to fix.
Getting the LED Dots Right
Creating circular LED dots instead of squares required using smoothstep for anti-aliased edges and precise distance calculations.
Performance
Initial calculations in JavaScript caused performance issues. Moving computations to the GPU shader (where they run in parallel) resolved this.
Publishing
The build outputs both ES modules and UMD formats with TypeScript declarations. Published to npm as 3d-marquee.
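A build setup along these lines would produce those outputs (an assumed configuration, not the package’s actual vite.config.ts; vite-plugin-dts is one common way to emit declarations):
// vite.config.ts — assumed setup, not the package's actual config
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import dts from 'vite-plugin-dts' // emits the TypeScript declarations

export default defineConfig({
  plugins: [react(), dts()],
  build: {
    lib: {
      entry: 'src/index.ts',
      name: 'Marquee3d', // global name for the UMD build
      formats: ['es', 'umd'],
      fileName: (format) => `3d-marquee.${format}.js`,
    },
    rollupOptions: {
      // Keep peer dependencies out of the bundle
      external: ['react', 'react-dom', 'three'],
    },
  },
})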
Usage:
import { LedMarqueeOrbContainer } from '3d-marquee'
function App() {
return (
<div style={{ width: '100vw', height: '100vh' }}>
<LedMarqueeOrbContainer
word="Hello World"
speed={0.1}
dimColor="#1a261a"
brightColor="#00ff4d"
/>
</div>
)
}
Learning Through Exposure
This project required working with concepts I had no prior experience with—3D graphics, shader programming, WebGL, and spherical coordinate systems. Rather than studying each topic systematically before starting, I dove into building and learned through necessity.
This approach of mass exposure to new topics has some advantages. When you encounter a concept in the context of solving a specific problem, the abstract becomes concrete. Matrix transformations made sense because I needed to map text coordinates. Shaders clicked because I had to render LED dots. The feedback loop is immediate: change code, see result, understand connection.
The tradeoff is that knowledge has gaps. I can write a working shader but couldn’t explain the full GLSL specification. I understand matrix multiplication for my use case but not the underlying linear algebra theory. This is acceptable when the goal is building something functional rather than comprehensive understanding.
Tools like Cursor and AI assistants make this learning style more viable. They can scaffold projects, explain specific concepts on demand, and suggest solutions to immediate problems. This allows focusing on understanding the architecture and logic while delegating syntax lookup and boilerplate generation.
The key insight: sometimes the most efficient path to learning is building something slightly beyond your current ability and filling knowledge gaps as you encounter them.
The next challenge is figuring out how to render that squirrel.