Auto Rig a 3D Character (Browser-Based, No Blender)

By Arron R. · 13 min read
An auto rig puts a clean humanoid skeleton and skin weights on a static mesh in minutes. Sorceress Auto-Rigging runs in the browser — import an OBJ, FBX, or GLB, place 13 anatomical markers, build the skeleton with one click, and export a rigged character for any engine.

A static 3D character mesh is a sculpture, not a rig. The model has every triangle the engine needs to render the silhouette, but no skeleton inside it, no skin weights binding the geometry to that skeleton, and no way for the engine to play an animation clip. Drop the mesh into Phaser, Three.js, or any other 2D or 3D engine and it shows up as a frozen statue. The fix is an auto rig — a one-click pipeline that infers a humanoid skeleton from anatomical landmarks, binds every vertex of the mesh to the right bones with the right weights, and produces a skeletal asset ready to drop into any animation system. This guide walks the full browser-based pipeline inside the Sorceress Auto-Rigging tool, the prompt-to-rig shortcut inside 3D Studio when you do not have a mesh yet, and the three failure modes that account for almost every bad rig.

Auto rig pipeline diagram: import OBJ FBX or GLB mesh, place 13 anatomical markers, build the skeleton with auto-weight, export FBX GLB or GLTF for any engine
The four-stage browser-based auto rig workflow inside Sorceress Auto-Rigging. Import a static mesh, place anatomical markers, build the skeleton and skin weights, export to any engine.

The auto rig pipeline at a glance

The whole pipeline collapses to five steps once the mesh is in the browser. Five steps, one tab, no Blender install:

  1. Import the mesh. Drop an OBJ, FBX, or GLB file into the upload zone. The tool parses geometry, normals, and UVs into a working buffer; any existing skeleton in an FBX is ignored, only the static surface is kept.
  2. Place 13 markers. Click 13 anatomical landmarks on the mesh: pelvis, neck, chin, the two shoulders, elbows, wrists, knees, and ankles. Guided mode walks the order and auto-mirror copies each left-side click to the right side, so a confident user is done in about a minute.
  3. Detect fingers (optional). Run the finger-peak detector on each open hand to find five fingertip locations per side. Skip this step for stylised characters where finger animation is not on the budget.
  4. Build skeleton + auto-weight. One click reads the markers, infers the bone hierarchy with the pelvis as root, and dispatches the heat-equilibrium weight solver on the Blender backend. The result is a skeletal mesh with smooth deformation at every joint.
  5. Pose-test and export. Open Pose Mode to drag any joint with full-body inverse kinematics, confirm the rig deforms cleanly, then export as FBX, GLB, or GLTF. The FBX export matches the SK_Mannequin reference skeleton so it imports cleanly into Unreal Engine.

Every step is interactive and undoable, and every step runs in your browser tab without you ever installing Blender, Maya, or any DCC tool. The only step that calls a remote service is the auto-weight step — that one runs against the Sorceress-hosted Blender server because the heat-equilibrium solver Blender ships still produces the cleanest open-source weights at the shoulder, hip, and knee.

What "auto rig" actually means in 2026

An auto rig collapses two hand-authored steps that traditionally take hours of expert time per character. The first step is building a skeleton: a hierarchy of bones with parent-child relationships that defines how the character moves at runtime. The pelvis is the root, the spine attaches above it, the neck attaches above the spine, the head attaches above the neck, and the limbs branch off as separate chains. Skeletal animation — the technique every modern engine uses for character motion — depends on this hierarchy because it is what lets a single bone rotation cascade through the chain.

The second step is skinning: binding every vertex of the mesh to one or more bones with a weight value that controls how the vertex moves when the bone moves. Linear blend skinning is the standard implementation — each vertex carries up to four bone indices and four weight values that sum to one, and at runtime the vertex position is computed as the weighted blend of where each driving bone would put it. Get the weights wrong and the mesh deforms like a stiff cardboard cutout at the joints; get them right and the character moves like a real body.
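The linear-blend computation is small enough to sketch in full. This is a plain-TypeScript illustration of the four-bone weighted blend, not the tool's renderer (an engine does this on the GPU); the type names are ours.

```typescript
// Linear blend skinning for one vertex: the skinned position is the
// weighted average of where each influencing bone's current transform
// would place the rest-pose vertex. Weights must sum to one.
type Vec3 = [number, number, number];

// For the sketch, a bone's skinning matrix is collapsed to a plain
// affine map from rest-pose space to posed space.
type BoneTransform = (v: Vec3) => Vec3;

interface SkinnedVertex {
  restPosition: Vec3;
  boneIndices: [number, number, number, number];
  boneWeights: [number, number, number, number]; // must sum to 1
}

function skinVertex(v: SkinnedVertex, bones: BoneTransform[]): Vec3 {
  const out: Vec3 = [0, 0, 0];
  for (let i = 0; i < 4; i++) {
    const w = v.boneWeights[i];
    if (w === 0) continue; // unused influence slot
    const p = bones[v.boneIndices[i]](v.restPosition);
    out[0] += w * p[0];
    out[1] += w * p[1];
    out[2] += w * p[2];
  }
  return out;
}

// A vertex split 50/50 between a static bone and a bone lifted 2 units:
// the vertex lands exactly halfway between the two candidate positions.
const identity: BoneTransform = (v) => v;
const liftY: BoneTransform = ([x, y, z]) => [x, y + 2, z];
const vtx: SkinnedVertex = {
  restPosition: [0, 1, 0],
  boneIndices: [0, 1, 0, 0],
  boneWeights: [0.5, 0.5, 0, 0],
};
console.log(skinVertex(vtx, [identity, liftY])); // → [0, 2, 0]
```

The blend is why joints matter so much: a shoulder vertex weighted between the chest and upper-arm bones slides smoothly between the two as the arm rotates, which is exactly the behaviour a bad weight map destroys.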

The auto rig automates both steps. The skeleton is inferred from a small set of anatomical markers placed on the mesh, and the bone hierarchy follows from anatomy. The skin weights are computed by a solver that reads the mesh geometry plus the skeleton and produces the four-bone, four-weight assignment for every vertex. The two solver families that actually work are heat-equilibrium (which simulates heat propagation from each bone through the mesh and reads the equilibrium temperature as the weight) and geodesic distance (which assigns weight by surface distance from each bone). Heat-equilibrium produces visibly better deformation at the shoulder and hip; the Sorceress auto rig uses it as the primary path through a hosted Blender backend, with geodesic as a local fallback.
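The geodesic idea can be approximated in a few lines. The sketch below uses straight-line distance as a stand-in for true surface-geodesic distance, and the function name is ours, not the tool's:

```typescript
// Fallback-style weighting: weight each bone by inverse distance to the
// vertex, keep the four strongest influences, renormalise to sum to one.
// A real geodesic solver measures distance along the mesh surface instead
// of through space, which is what stops weights "leaking" across limbs.
type Vec3 = [number, number, number];

const dist = (a: Vec3, b: Vec3): number =>
  Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);

function fallbackWeights(vertex: Vec3, bonePositions: Vec3[]) {
  const raw = bonePositions.map((b, bone) => ({
    bone,
    w: 1 / (dist(vertex, b) + 1e-6), // epsilon guards a vertex on the bone
  }));
  raw.sort((a, b) => b.w - a.w);
  const top = raw.slice(0, 4); // engines expect at most four influences
  const sum = top.reduce((s, e) => s + e.w, 0);
  return top.map((e) => ({ bone: e.bone, w: e.w / sum }));
}
```

The through-space shortcut is also why a naive distance metric bleeds at the shoulder: the chest can be closer to the upper-arm bone in a straight line than along the surface.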

The 13 anatomical markers that drive the humanoid auto rig: pelvis, neck, chin, shoulder L and R, elbow L and R, wrist L and R, knee L and R, ankle L and R, plus optional hip markers
The 13 required markers (plus 2 optional hip markers) the auto rig uses to infer the humanoid skeleton. Verified against src/lib/rigging/types.ts on May 10, 2026.

Step 1 — Bring in your 3D mesh

Open Auto-Rigging. The page is a three-panel layout: a 3D viewport in the centre, a marker / detection / weights inspector on the left rail, and a log + export panel on the right. The first thing the page does is wait for a mesh.

Three formats are accepted directly:

  • OBJ — the simplest and most portable. The tool parses vertices, faces, and UVs; per-face material assignments are ignored because the auto rig only needs geometry.
  • FBX — the industry-standard skeletal-asset format and what most game engines export. The auto rig parses an FBX even if it already contains a skeleton (the existing rig is ignored, only the static surface is kept).
  • GLB — the binary glTF 2.0 container. This is the export format every Sorceress image-to-3D model uses, which means the typical end-to-end Sorceress pipeline (prompt to image to mesh to rig to export) is GLB through the entire chain.

Drag a file onto the viewport or click the upload button. The mesh appears in wireframe by default with the surface mesh hidden so the markers are easier to place against the silhouette. Two pre-flight checks save time: rotate the camera to confirm the character is in T-pose (arms out horizontal, legs straight, hands open if you want finger detection), and check that the model is front-facing toward the camera. The auto rig defaults to the front-facing assumption for marker placement.
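For a sense of how little of the OBJ format the rig actually needs, here is a minimal geometry-only parse — vertices and triangulated faces, everything else skipped. A sketch, not the tool's real parser:

```typescript
// Minimal OBJ geometry parse: vertices and triangle faces only, which is
// all an auto rig needs (materials, groups, and normals are skipped).
function parseObj(text: string) {
  const vertices: [number, number, number][] = [];
  const faces: [number, number, number][] = [];
  for (const line of text.split("\n")) {
    const parts = line.trim().split(/\s+/);
    if (parts[0] === "v") {
      vertices.push([+parts[1], +parts[2], +parts[3]]);
    } else if (parts[0] === "f") {
      // Face entries look like "v", "v/vt", or "v/vt/vn"; indices are 1-based
      const idx = parts.slice(1).map((p) => parseInt(p.split("/")[0], 10) - 1);
      // Fan-triangulate polygons with more than three corners
      for (let i = 1; i + 1 < idx.length; i++) {
        faces.push([idx[0], idx[i], idx[i + 1]]);
      }
    }
  }
  return { vertices, faces };
}

const quad = `
v 0 0 0
v 1 0 0
v 1 1 0
v 0 1 0
f 1 2 3 4
`;
console.log(parseObj(quad).faces.length); // → 2 triangles from one quad
```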

Step 2 — Place the 13 anatomical markers

The 13 markers are the data the auto rig actually needs. Every other parameter is inferred or chosen later. Get the markers right and the rest of the pipeline is essentially free.

The full list, in the order guided mode walks them: chin, neck, pelvis, optional left hip, left shoulder, left elbow, left wrist, left knee, left ankle. That is one side. The right-side markers come for free if the auto-mirror toggle is on, which it is by default. Auto-mirror reflects each left-side marker across the YZ plane of the mesh; for any character that is bilaterally symmetric — almost every humanoid — auto-mirror is correct, fast, and prevents the most common rigging error (asymmetric markers producing a tilted skeleton).
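Auto-mirror is geometrically trivial, which is part of why it is safe to leave on: reflecting across the mesh's YZ plane just negates X. The marker shape and naming below are illustrative, and the sketch assumes the mesh is centred on X = 0, as a front-facing T-pose import is:

```typescript
// Auto-mirror sketch: a left-side marker reflected across the YZ plane
// becomes its right-side twin. Only X changes sign; height and depth stay.
type Marker = { name: string; position: [number, number, number] };

function mirrorMarker(left: Marker): Marker {
  const [x, y, z] = left.position;
  return {
    name: left.name.replace(/left/i, "right"),
    position: [-x, y, z], // reflect across the YZ plane
  };
}

console.log(mirrorMarker({ name: "left_shoulder", position: [0.4, 1.5, 0] }));
// → right_shoulder at [-0.4, 1.5, 0]
```

Because both sides come from one click, the mirrored pair is symmetric by construction — the tilted-skeleton failure mode cannot occur.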

Two interaction details that matter:

  • Centre-snap projects each click to the volumetric centre of the limb at that height rather than to the surface point you clicked. Without it, markers land on the skin and the inferred skeleton sits on the skin rather than down the bone axis. With it, markers land inside the volume where the joint actually is. Leave it on.
  • Guided mode walks the placement order and shows a "9 of 9" progress label, so you do not need to remember the order. Free-placement mode is available for re-doing a specific marker after the initial pass; click the marker name in the inspector to make it the active marker, then click the new position on the mesh.

The interactive viewport supports orbit (right-mouse drag), pan (middle-mouse), and zoom (wheel). Three pre-set views — front, side, top — let you snap to an axis-aligned camera quickly. The pelvis, in particular, is much easier to place from a side view than from the front.

Step 3 — Build the skeleton and auto-weight the skin

Once the 13 markers are placed, the Build Skeleton button on the right rail triggers the actual rigging. Two things happen in sequence:

  1. Skeleton inference reads the markers and constructs the bone hierarchy. The pelvis is the root. Above the pelvis, the spine, neck, and head bones are inferred from the pelvis-neck-chin marker chain. The arms are two-bone chains (upper arm + forearm) on each side, plus a hand bone, plus optional finger bones if the detector found fingers. The legs are similar two-bone chains (thigh + shin) plus a foot bone. The full skeleton is roughly 30 bones for a hand-less humanoid and roughly 70 if fingers are included.
  2. Auto-weight dispatches the mesh + skeleton to the Blender backend, which runs the heat-equilibrium weight solver and returns a per-vertex weight map. The map is a four-bone, four-weight assignment per vertex, suitable for linear blend skinning in any modern engine. The wait is typically 60 to 180 seconds depending on vertex count; progress is reported in the log on the right rail.
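The hierarchy half of step 1 is deterministic: anatomy fixes the parent-child edges, and the markers fix the joint positions. A sketch of the core pelvis-to-head chain (bone names are illustrative; the real build also hangs the limb chains off the same pattern):

```typescript
// Skeleton inference sketch: the pelvis-neck-chin markers fix the core
// chain, with a spine bone interpolated between pelvis and neck.
type Vec3 = [number, number, number];

const midpoint = (a: Vec3, b: Vec3): Vec3 => [
  (a[0] + b[0]) / 2,
  (a[1] + b[1]) / 2,
  (a[2] + b[2]) / 2,
];

interface Bone {
  name: string;
  parent: string | null; // null marks the root
  position: Vec3;
}

function inferCoreChain(m: { pelvis: Vec3; neck: Vec3; chin: Vec3 }): Bone[] {
  return [
    { name: "pelvis", parent: null, position: m.pelvis },
    { name: "spine", parent: "pelvis", position: midpoint(m.pelvis, m.neck) },
    { name: "neck", parent: "spine", position: m.neck },
    { name: "head", parent: "neck", position: m.chin },
  ];
}
```

Every bone except the root names its parent, which is what lets one pelvis rotation cascade through the whole character at runtime.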

The auto-weight result is what makes or breaks the rig. A clean weight map produces smooth shoulder and hip deformation; a noisy one produces collapsing volume at every joint. The Blender heat-equilibrium solver is the highest-quality open-source option and is what the Sorceress backend runs by default. The local geodesic-distance fallback (used when the Blender backend is unavailable) is faster but produces visibly worse weights at the shoulder, hip, and knee — the joints where weight quality matters most.

Once the weights are computed, toggle the show-weights view in the inspector to see the bone influence regions painted across the mesh. Each bone shows a heatmap on the surface where it has any weight; clicking a bone in the skeleton selects it and shows just that bone’s influence. This is the diagnostic view for spotting weight bleed (a bone influencing vertices it should not, like the upper arm dragging the chest when the arm rotates).

Step 4 — Test the rig in pose mode and export

Pose Mode is the live diagnostic for the rig. Toggle it on from the right rail and the viewport switches from a static T-pose to an interactive rig where every joint is draggable. Two interaction modes:

  • Rotate — click a bone to select it, then drag the rotation gizmo. The bone (and every child bone in the chain) rotates around its joint, and the mesh deforms live based on the skin weights. This is the test for weight quality.
  • Grab — drag the end-effector of any chain (wrist, ankle, head) and the chain solves inverse kinematics automatically. Grab the wrist and the elbow + shoulder rotate to follow; grab the ankle and the knee + hip do the same. This is closer to how an animator drives a character at authoring time.
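Grab mode's chain solve can be illustrated with the classic analytic two-bone case in 2D — law of cosines for the elbow, target direction minus the triangle's inner angle for the shoulder. The tool's full-body IK generalises well beyond this sketch:

```typescript
// Two-bone IK in 2D: given upper/lower segment lengths and a target,
// return shoulder and elbow angles (radians) that reach it. The elbow
// angle is the interior angle between the two segments (PI = straight).
function twoBoneIK(upperLen: number, lowerLen: number, tx: number, ty: number) {
  const clamp = (x: number) => Math.min(1, Math.max(-1, x));
  // Clamp the target distance into the reachable range
  const d = Math.min(Math.hypot(tx, ty), upperLen + lowerLen - 1e-6);
  // Elbow from the law of cosines on the shoulder-elbow-target triangle
  const elbow = Math.acos(
    clamp((upperLen ** 2 + lowerLen ** 2 - d ** 2) / (2 * upperLen * lowerLen))
  );
  // Shoulder: aim at the target, then rotate back by the triangle's
  // inner angle at the shoulder
  const inner = Math.acos(
    clamp((upperLen ** 2 + d ** 2 - lowerLen ** 2) / (2 * upperLen * d))
  );
  return { shoulder: Math.atan2(ty, tx) - inner, elbow };
}
```

A target at full reach solves to a straight arm (elbow ≈ π); a closer target bends the elbow, which is exactly the behaviour you see when dragging the wrist toward the chest in pose mode.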

Pose mode tests the same skeletal animation pathway the engine will use at runtime — if the rig deforms cleanly here, it will deform cleanly in any engine that consumes skinned-mesh assets. If you see weight bleed, re-run auto-weight; the second pass usually produces a cleaner solution because some marker positions can be nudged in the inspector without re-doing them on the mesh.

When the rig deforms cleanly, hit Export. Three formats are available:

  • FBX — the format Unreal Engine prefers for skeletal mesh import. The Sorceress export matches the SK_Mannequin reference skeleton so the FBX drops directly into a UE5 Content Browser and selects SK_Mannequin as the parent skeleton automatically. Export size is typically 1 to 5 MB for a 20k-vertex character.
  • GLB — the binary glTF 2.0 container, the standard for web-first 3D and Three.js-based engines. The full skeleton plus skin weights plus geometry are all in one file. FBX and GLB carry the same data; the difference is which engine ecosystem they slot into.
  • Skeleton JSON — a debug-friendly export of just the bone hierarchy and rest-pose transforms, without the mesh. Useful for cross-referencing the rig against a custom animation pipeline.

Skipping the mesh — the prompt-to-rig path in 3D Studio

The Auto-Rigging tool covered above assumes you already have a mesh. If you do not — if you are starting from a prompt or a single image — the cleaner entry point is 3D Studio, which chains the entire character pipeline in one tab. Four steps, all in browser:

  1. Generate — AI Image Gen turns the prompt into a concept character. Reference images keep the character on-model across iterations.
  2. Lift to 3D — one of seven image-to-3D models extracts a textured mesh from the concept image. The output is a GLB.
  3. Auto-rig + weight paint — the same Auto-Rigging tool covered in this guide, but with the mesh already loaded into the viewport from the previous step.
  4. Text-to-animation + export — describe the motion ("a relaxed walk", "a sword slash from right to left", "a slow idle breathing") and the AI text-to-motion engine produces an animation clip on the rig. Export as FBX, GLB, or GLTF.

The reason 3D Studio is the recommended starting point for prompt-driven character work: every step’s output flows into the next without a download/upload round trip. The image lands in image-to-3D as a working buffer; the mesh lands in auto-rig as a working buffer; the rig lands in text-to-animation with the skeleton already loaded. The full image-to-3D model pipeline guide covers the upstream steps in depth; the AI 3D character generator guide covers the prompt-driven entry point.

Non-humanoid creatures — Procedural Walk for multi-legged characters

Spiders, ants, drakes, four-legged beasts, multi-legged drones — anything with more (or fewer) than two legs — falls outside the humanoid auto rig’s domain. The 13-marker anatomy assumes a bipedal silhouette; running the auto rig on a quadruped produces a usable skeleton only if you place the front legs as arms and the rear legs as legs, and even then the proportions need manual cleanup.

The right tool for non-humanoid creatures is Procedural Walk, a separate auto-rigger for multi-legged characters. The pipeline is similar (drop in a mesh, the tool finds and rigs the legs) but the runtime behaviour is fundamentally different. Humanoid rigs are typically driven by baked animation clips. Procedural rigs solve the foot placement live every frame from inverse-kinematics targets, so a creature walking up a staircase plants every foot on the right step automatically without any baked animation per-step. For a beginner introduction to the procedural-rig pattern, the voxel generator guide walks through it on a chunky voxel quadruped, and the Mixamo alternative guide compares the humanoid + multi-leg combination against legacy hosted rigging services.

Three auto rig failure modes: surface-placed markers tilting the skeleton, asymmetric left-right shoulder placement, and clenched fists hiding fingertip peaks, each with the recommended fix
The three failure modes that account for almost every "why does my auto rig look broken" question. Two have one-toggle fixes; the third is a one-step source-mesh re-export.

Common failure modes (and the fixes)

Three failure modes show up in every triage. Knowing each one (and the fix) saves a re-build of the rig:

  • Markers placed on the mesh surface. Cause: centre-snap is off. Effect: the inferred skeleton sits on the skin rather than down the bone axis, and auto-weight then deforms the mesh as if it were skinning a hollow shell — collapsing volume and twisting silhouettes at every joint. Fix: leave centre-snap on (it is on by default), and re-place any visibly-off markers using marker-edit mode rather than re-running the entire guided sequence.
  • Asymmetric left-right marker placement. Cause: auto-mirror is off, or both sides were placed manually with slightly different heights. Effect: the inferred skeleton has a tilted clavicle and tilted hip line; the rig produces visible twist in the torso and limbs at rest. Fix: leave auto-mirror on for any bilaterally-symmetric character. Place only the left side; the right side mirrors automatically.
  • Clenched fists in the source mesh. Cause: the source character was sculpted with closed hands. Effect: the finger-peak detector cannot find five distinct fingertip peaks per hand, and the rig produces no finger bones. Fix: re-export the source mesh with hands open in T-pose, or skip finger detection (the rig still works without finger bones).
  • Auto-weight comes back patchy at the shoulder or hip. Cause: the Blender backend was unavailable and the local geodesic fallback ran instead. Fix: retry auto-weight after a brief wait. The Blender-backed run is labelled "Heat-equilibrium" in the log; the fallback is labelled "Geodesic".
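The asymmetric-marker failure is easy to lint for before building the skeleton. The check below flags any left/right pair whose heights differ by more than a tolerance; the marker names and the 2 cm default are our assumptions, not the tool's:

```typescript
// Symmetry lint for manually-placed markers: a left/right pair at
// mismatched heights is the root cause of the tilted-clavicle rig.
// Mirrored X is expected; differing Y (height) is the bug.
type Vec3 = [number, number, number];

function findAsymmetricPairs(
  markers: Record<string, Vec3>,
  tolerance = 0.02 // metres, assuming a metre-scale character
): string[] {
  const bad: string[] = [];
  for (const [name, pos] of Object.entries(markers)) {
    if (!name.startsWith("left_")) continue;
    const twin = markers[name.replace("left_", "right_")];
    if (twin && Math.abs(pos[1] - twin[1]) > tolerance) bad.push(name);
  }
  return bad;
}

console.log(
  findAsymmetricPairs({
    left_shoulder: [0.4, 1.5, 0],
    right_shoulder: [-0.4, 1.41, 0], // 9 cm lower: tilted clavicle
    left_knee: [0.15, 0.55, 0],
    right_knee: [-0.15, 0.55, 0],
  })
); // flags left_shoulder only
```

With auto-mirror on, this lint can never fire, which is the whole argument for leaving the toggle alone.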

The verification path for any rig that looks wrong is the pose-mode test. Toggle pose mode, drag the wrist of each arm to the chest, drag each ankle forward, drag the head left and right. If the deformation looks clean across all six tests, the rig is good. If any test produces visible weight bleed, re-run auto-weight; if a second auto-weight does not fix it, the markers are the problem. Verified May 10, 2026 against the live tool source in src/app/rigging/page.tsx and src/lib/rigging/types.ts — the 13-marker anatomy, the guided-mode order, the auto-mirror toggle, the centre-snap default, and the FBX/GLB/GLTF export options all match the deployed Auto-Rigging build as of today.

Frequently Asked Questions

What does auto rig actually mean for a 3D character?

An auto rig is the automatic equivalent of two hand-authored steps: building a skeleton inside the mesh and binding every vertex of the mesh to one or more bones with a weight value that controls how the vertex moves when the bone moves. The skeleton is a hierarchy of bones with parent-child relationships — pelvis is the root, spine attaches to pelvis, neck attaches to spine, head attaches to neck, and so on for the limbs. The bind step is called skinning — every vertex carries up to four bone indices and four weight values that sum to one, so when the engine animates the bones at runtime, each vertex deforms as a weighted blend of its driving bones. Doing this by hand in Blender or Maya takes hours per character and requires a working knowledge of joint-level weight painting. An auto rig pipeline does both steps automatically: it infers the skeleton from a small set of anatomical markers placed on the mesh, then computes the skin weights using a heat-equilibrium or geodesic-distance solver. The result is a skeletal mesh ready for animation in any engine, produced in minutes instead of hours, with no Blender install required.

What mesh formats can the Sorceress auto rig accept?

The Auto-Rigging tool accepts OBJ, FBX, and GLB files. OBJ is the simplest of the three and works well for static meshes exported from any DCC tool — Blender, Maya, ZBrush, 3D Studio Max, MagicaVoxel. FBX is the industry-standard skeletal-asset format and is what most game engines export and import; the auto rig parses an FBX-encoded mesh even if it already contains a skeleton (the tool ignores the existing rig and builds its own). GLB is the binary glTF 2.0 container — common from web-first 3D pipelines and from any image-to-3D model that exports glTF. The Sorceress 3D Studio image-to-3D step exports GLB by default, which means the typical Sorceress pipeline (prompt to image to mesh to rig) is GLB through the entire chain. Every output is also exportable as FBX, GLB, or GLTF — the engine of the destination game decides which format the file lands as.

How is the Sorceress auto rig different from Mixamo?

Mixamo is a hosted auto-rigging service Adobe acquired in 2015 and largely stopped developing — it accepts a humanoid FBX, asks you to mark up wrists, elbows, knees, and chin, and returns a rigged FBX. The Sorceress Auto-Rigging tool runs on the same conceptual idea (place a small set of anatomical markers on a humanoid mesh, get a rigged FBX back) but with three practical differences. First, it runs in your browser as part of a larger 3D pipeline — the same tab can generate the source character, lift the image to 3D, auto-rig, then animate by text prompt without leaving the page. Second, it uses 13 required markers (plus 2 optional hip markers) versus Mixamo’s smaller marker set, which lets the rig handle non-standard proportions (chibi heads, long-armed characters, stylised silhouettes) that confuse a fixed-marker rigger. Third, the tool supports non-humanoid creatures through the Procedural Walk pipeline (multi-leg auto-rigging plus IK foot placement), which Mixamo does not offer at all. The honest comparison is in the Mixamo alternative guide; this guide focuses on the rigging workflow itself.

Do I need Blender installed to use the auto rig?

No. The auto rig runs entirely in your browser tab — mesh parsing, marker placement, skeleton building, and the pose-test viewer all execute client-side in JavaScript and Three.js. The auto-weight step calls a remote Blender backend that the Sorceress platform hosts and maintains; you never see Blender, never install it, never deal with Python scripts. Blender is the engine inside the auto-weight server because the heat-equilibrium weight solver Blender ships is the highest-quality open-source option available for skeletal-mesh weight binding. Wrapping it as a service means every Sorceress user gets the production-grade weight quality without owning the toolchain. The fallback when the Blender backend is unavailable is a local geodesic-distance solver in the browser; it is faster but produces visibly worse weights at the shoulder, hip, and knee — the joints where weight quality matters most.

How long does the full auto rig take from upload to export?

On a typical humanoid mesh of about 20,000 vertices, the full pipeline takes between three and six minutes of wall-clock time. Mesh parsing and the marker-placement step are interactive and depend on how fast you click — guided mode walks you through 9 marker placements (the auto-mirror toggle handles the second side automatically), and a confident user finishes that in about 60 seconds. The skeleton-build step is sub-second on the client. The auto-weight step is the longest single operation; it usually takes 60 to 180 seconds on the Blender backend depending on vertex count, with progress reported in the log. Pose-mode testing and export are interactive again. The total budget on a 20k-vertex character is roughly four minutes of active interaction plus the auto-weight wait. On heavier meshes (50k vertices and above), the auto-weight step grows roughly linearly with vertex count; budget closer to 8 to 12 minutes for very dense models.

Can the auto rig handle non-humanoid creatures like spiders or quadrupeds?

The Auto-Rigging tool covered in this guide handles humanoids only — bipedal characters with two arms, two legs, a single head, and the standard 13-marker anatomy. Spiders, quadrupeds, drakes, multi-legged drones, and any creature whose locomotion is not a pelvis-and-two-legs walk cycle are handled by a separate tool: Procedural Walk, at /rigging-multileg, which auto-rigs creatures with arbitrary leg counts and drives them with real-time inverse kinematics so the feet plant naturally on uneven terrain. The pipeline is conceptually similar (drop in a mesh, the tool finds and rigs the legs) but the runtime behaviour is fundamentally different — humanoid rigs are typically driven by baked animation clips, while procedural rigs solve the foot placement live every frame from the IK targets. For a quadruped that needs traditional baked animation rather than procedural locomotion, the workaround is to rig it as a ‘humanoid’ by placing the front legs as arms and the rear legs as legs; the auto rig will produce a usable skeleton, though the proportions will need manual cleanup.

What goes wrong most often when auto rigging a character?

Three failure modes account for almost every bad rig. First, marker placement on the surface of the mesh instead of inside the volume — the auto rig assumes each marker sits at the joint centre, which is inside the limb, not on its skin. Front-facing the model and using the centre-snap toggle (on by default) projects each click to the volumetric centre of the limb at that height; turning centre-snap off and clicking on the mesh surface produces a skeleton where bones sit on the skin rather than down the bone axis, and the weights will then deform the mesh as if it were skinning a hollow shell. Second, asymmetric markers — a left shoulder placed at one height and a right shoulder placed at a slightly different height produces a skeleton with a tilted clavicle and a mesh that twists when the arms move. Auto-mirror solves this; leave it on unless the character is genuinely asymmetric. Third, hands clenched into fists in the source mesh — the finger detector cannot find five distinct fingertip peaks if the hand is closed, and the rig produces a hand with no finger bones. Re-export the source mesh with hands open, or skip finger detection (the rig still works without finger bones, the hands just become single-bone paddles).

Sources

  1. Skeletal animation (Wikipedia)
  2. Inverse kinematics (Wikipedia)
  3. SkinnedMesh (Three.js documentation)
  4. glTF 2.0 specification (Khronos Group)
  5. FBX (Wikipedia)
  6. Linear blend skinning (Wikipedia)
Written by Arron R. · 2,865 words · 13 min read
