React Native Just Entered the VR Arena
For years, building a VR app meant choosing between Unity and Unreal Engine. Both are powerful, both require specialized engineers who think in C# or C++, and both come with licensing models that can gut a startup's runway. That era is ending.
Meta's official support for React Native on Quest means your team can write JSX components, manage state with hooks, and deploy to a standalone VR headset using the same Expo workflow you already use for iOS and Android. The implications are massive: any company with a React Native codebase can now target spatial computing without hiring a separate game engine team.
This guide covers everything you need to go from zero to a shipped Meta Quest app. We will walk through the VR component ecosystem, 3D rendering with React Three Fiber, input handling for both controllers and hand tracking, spatial UI patterns that actually work on a headset, and the performance constraints you need to respect on standalone hardware. If you have built a React Native mobile app before, you already have 70% of the skills you need.
Why React Native on Quest Changes the Equation
Let us be direct about why this matters. The VR development talent pool has been tiny and expensive. Unity developers with Quest experience command $180K to $250K in the US. Finding them takes months. Meanwhile, there are millions of JavaScript and TypeScript developers worldwide, and hundreds of thousands of them have shipped production React Native apps.
Meta recognized this bottleneck. Their investment in React Native for Quest is not altruistic. They need more apps on the Quest platform to justify hardware sales, and the fastest path to more apps is lowering the barrier to entry. Here is what the official support includes:
- Expo plugin for Quest: A managed plugin that handles all the Android-based Quest configuration, splash screens, and build signing. Run `npx expo prebuild` and you get a Quest-ready project.
- React Native XR: A first-party library providing spatial primitives: `XRSpace`, `XRPanel`, `XRController`, and `XRHand` components that map directly to the OpenXR runtime on Quest.
- Metro bundler support: Hot reload works over USB and Wi-Fi, so you can iterate on VR scenes without removing the headset and rebuilding.
- Hermes engine optimizations: Meta tuned Hermes specifically for Quest's Snapdragon XR2 Gen 2 chipset, including garbage collection pauses that stay under the 11ms frame budget at 90Hz.
The practical outcome is that a team of three React Native engineers can prototype a Quest app in two to three weeks. Compare that to the two to three months a similarly sized Unity team would need to reach the same milestone, accounting for the learning curve and tooling differences.
That said, React Native on Quest is not a replacement for Unity or Unreal in every scenario. If you are building a graphically intensive game with complex physics, skeletal animation, and hundreds of dynamic objects, a game engine is still the right tool. React Native shines for spatial productivity apps, data visualization, social experiences, training simulations, and any VR project where UI and interaction design matter more than raw polygon throughput.
Setting Up Your Quest Development Environment
Getting started is straightforward if you have an existing Expo workflow. Here is the step-by-step setup that we use at Kanopy for client projects targeting Meta Quest 3 and Quest 3S.
Prerequisites
- Node.js 20+ and the latest Expo CLI
- A Meta Quest 3 or 3S headset with Developer Mode enabled
- Android Studio with the Quest-specific SDK packages (API Level 32+, Quest OpenXR Loader)
- A Meta developer account with your device registered for sideloading
Project Initialization
Start with a standard Expo project and add the Quest plugin:
- Run `npx create-expo-app my-quest-app --template expo-template-quest` to scaffold the project with VR defaults.
- Install the core spatial packages: `@react-native-xr/core`, `@react-native-xr/hands`, and `@react-native-xr/controllers`.
- Add Three.js and React Three Fiber for 3D scene rendering: `three`, `@react-three/fiber`, and `@react-three/xr`.
- Configure `app.json` with the Quest plugin, specifying your target refresh rate (72Hz, 90Hz, or 120Hz) and passthrough permissions if you need mixed reality (a sketch of this config follows below).
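To make that last step concrete, here is a rough sketch of the Quest section of `app.json`. The plugin name (`expo-quest`) and the option keys are assumptions for illustration; check the plugin's documentation for the actual schema.

```json
{
  "expo": {
    "name": "my-quest-app",
    "plugins": [
      [
        "expo-quest",
        {
          "refreshRate": 90,
          "enablePassthrough": true,
          "handTracking": true
        }
      ]
    ]
  }
}
```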
Build and Deploy
Use EAS Build for cloud builds or run `npx expo run:android --device` for local builds deployed directly to your connected headset. The first build takes 8 to 12 minutes. Subsequent incremental builds drop to under 2 minutes thanks to Metro's caching.
One detail that trips up newcomers: Quest runs a modified version of Android, not standard Android. The Expo Quest plugin handles most of the differences, but if you are using third-party native modules, you may need to verify Quest compatibility. Libraries that depend on standard Android views or activities will not work in immersive mode. Stick to libraries that operate at the JavaScript layer or that have explicit Quest support.
3D Scene Rendering with React Three Fiber
React Three Fiber (R3F) is the bridge between React's declarative component model and Three.js's 3D rendering engine. On Quest, R3F renders directly to the headset's OpenXR swapchain, giving you a full stereoscopic 3D scene managed entirely through JSX.
If you have used R3F on the web, the mental model is identical. You write `<Canvas>`, drop in `<mesh>` components, attach materials and geometries, and let React handle the reconciliation. The key difference on Quest is that the camera is controlled by the headset's 6DoF tracking, not by mouse or touch input.
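Here is a minimal scene sketch to make that concrete. It assumes the `createXRStore`/`<XR>` API from recent `@react-three/xr` releases; exact prop names can vary by version.

```tsx
import React from 'react';
import { Canvas } from '@react-three/fiber';
import { XR, createXRStore } from '@react-three/xr';

// One store holds the XR session state (assumes the @react-three/xr v6 API).
const store = createXRStore();

export default function App() {
  return (
    <Canvas>
      <XR store={store}>
        {/* The headset's 6DoF tracking drives the camera; no OrbitControls. */}
        <ambientLight intensity={0.4} />
        <directionalLight position={[2, 4, 1]} />
        {/* A simple cube 1.5m in front of the user, roughly at eye height. */}
        <mesh position={[0, 1.4, -1.5]}>
          <boxGeometry args={[0.3, 0.3, 0.3]} />
          <meshStandardMaterial color="orange" />
        </mesh>
      </XR>
    </Canvas>
  );
}
```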
Scene Architecture Best Practices
- Keep your scene graph shallow. Every node in the Three.js scene graph costs traversal time per frame. On Quest's mobile chipset, deep nesting of groups and objects can push you past the frame budget. Aim for a maximum depth of 5 to 7 levels.
- Use instanced meshes for repeated geometry. If you are rendering 50 identical cubes, `InstancedMesh` batches them into a single draw call. This alone can save 3 to 5ms per frame in complex scenes (see the sketch after this list).
- Prefer `drei` helpers. The `@react-three/drei` library provides pre-built components for text rendering (`Text3D`, `Billboard`), environment maps, and spatial audio. These are optimized and save you from reinventing common patterns.
- Manage assets with Suspense. Wrap model loading in React Suspense boundaries. This lets you show a loading indicator in VR while GLTF models download, preventing the jarring experience of an empty scene.
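Here is what the instancing pattern can look like using the `Instances`/`Instance` helpers from `@react-three/drei`, which wrap Three.js's `InstancedMesh`. The layout values are arbitrary; the point is that all 50 cubes share a single draw call.

```tsx
import React from 'react';
import { Instances, Instance } from '@react-three/drei';

// 50 identical cubes rendered as one draw call instead of 50.
export function CubeField() {
  return (
    <Instances limit={50}>
      <boxGeometry args={[0.2, 0.2, 0.2]} />
      <meshStandardMaterial color="teal" />
      {Array.from({ length: 50 }, (_, i) => (
        <Instance
          key={i}
          // Lay the cubes out in a 10x5 grid, 2m in front of the user.
          position={[(i % 10) * 0.3 - 1.35, 1 + Math.floor(i / 10) * 0.3, -2]}
        />
      ))}
    </Instances>
  );
}
```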
Lighting and Materials
On mobile VR hardware, real-time lighting is expensive. Here is what works:
- Use baked lighting wherever possible. Pre-compute lightmaps in Blender and include them in your GLTF exports.
- Limit yourself to one dynamic directional light and at most two point lights. Every additional light multiplies fragment shader cost (a minimal setup follows this list).
- Prefer `MeshStandardMaterial` over `MeshPhysicalMaterial`. The physical material adds subsurface scattering and clearcoat calculations that are too expensive for Quest's GPU.
- Use texture atlases to reduce material switches. Each unique material triggers a separate draw call.
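A light rig that respects these limits might look like this:

```tsx
import React from 'react';

// One dynamic directional light plus a low ambient fill; shadows stay off.
// Anything fancier should be baked into lightmaps in Blender and shipped
// in the GLTF export, as described above.
export function SceneLights() {
  return (
    <>
      <ambientLight intensity={0.3} />
      <directionalLight position={[3, 5, 2]} intensity={1.2} />
    </>
  );
}
```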
A well-optimized R3F scene on Quest 3 can handle 100K to 200K triangles at 90fps. That is more than enough for productivity apps, data visualizations, and stylized environments. For photorealistic scenes with millions of polygons, you will need a game engine with LOD systems and occlusion culling built in.
Hand Tracking and Controller Input
Input is where VR development diverges most sharply from mobile development. On a phone, you have taps and swipes. On Quest, you have two tracked controllers with buttons, thumbsticks, and triggers, plus full skeletal hand tracking with 26 joints per hand.
Controller Input
The `@react-native-xr/controllers` library exposes controller state as React hooks (a usage sketch follows this list):
- `useController('left')` and `useController('right')` return position, rotation, button states, and thumbstick values at 72Hz or higher.
- For pointer interactions (selecting UI elements, grabbing objects), use the `XRRay` component to cast a ray from the controller and detect intersections with your 3D scene.
- Haptic feedback is available via `controller.hapticActuators[0].pulse(intensity, duration)`. Use short pulses (20 to 50ms) for button confirmations and longer pulses (100 to 200ms) for grab/release events.
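A sketch of the trigger-plus-haptics flow, assuming the hook and field shapes described above (the shipped library's property names may differ, so treat this as illustrative):

```tsx
import React, { useEffect } from 'react';
// Assumed API: hook name, button map, and hapticActuators shape are taken
// from the description above, not from verified library docs.
import { useController } from '@react-native-xr/controllers';

export function TriggerHaptics() {
  const right = useController('right');

  useEffect(() => {
    if (!right?.buttons.trigger.pressed) return; // no controller, or idle
    // Short 30ms pulse at 60% intensity to confirm the press.
    right.hapticActuators[0]?.pulse(0.6, 30);
  }, [right]);

  return null; // logic-only component, nothing to render
}
```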
Hand Tracking
Hand tracking on Quest 3 is remarkably accurate. The `@react-native-xr/hands` library gives you access to the full hand skeleton:
- `useHand('left')` returns an array of 26 joint positions and rotations, updated every frame.
- Built-in gesture detection covers pinch, grab, point, and open palm. You can compose custom gestures by checking joint distances and angles (see the sketch after this list).
- The `XRHandModel` component renders a visual hand mesh that follows the tracked skeleton. You can swap in custom hand models (robot hands, gloved hands) for branded experiences.
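As an example of composing a custom gesture, here is a pinch check based on thumb-to-index distance. The joint map and the WebXR-style joint names (`thumb-tip`, `index-finger-tip`) are assumptions about this library's shape:

```tsx
// Assumed API: useHand from @react-native-xr/hands, returning a joint map
// keyed by WebXR-style joint names. Verify against the library's docs.
import { useHand } from '@react-native-xr/hands';

const PINCH_THRESHOLD_M = 0.02; // fingertips within ~2cm counts as a pinch

export function useIsPinching(side: 'left' | 'right'): boolean {
  const hand = useHand(side);
  if (!hand) return false; // hand not currently tracked

  const thumb = hand.joints['thumb-tip'].position;
  const index = hand.joints['index-finger-tip'].position;

  // Straight-line distance between thumb tip and index fingertip.
  return (
    Math.hypot(thumb.x - index.x, thumb.y - index.y, thumb.z - index.z) <
    PINCH_THRESHOLD_M
  );
}
```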
A practical tip: always support both controllers and hand tracking in your app. Users switch between them constantly. The Quest runtime can detect which input method is active, so you can conditionally render controller models or hand models based on the current input source.
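A sketch of that conditional rendering; `useActiveInputSource` is a hypothetical hook standing in for whatever API the runtime exposes for the active input method:

```tsx
import React from 'react';
// Hypothetical imports: XRController and XRHandModel are the first-party
// primitives named earlier; useActiveInputSource is an assumed hook.
import { XRController, useActiveInputSource } from '@react-native-xr/core';
import { XRHandModel } from '@react-native-xr/hands';

export function InputVisuals() {
  const source = useActiveInputSource(); // 'hands' | 'controllers' (assumed)
  if (source === 'hands') {
    return (
      <>
        <XRHandModel hand="left" />
        <XRHandModel hand="right" />
      </>
    );
  }
  return (
    <>
      <XRController hand="left" />
      <XRController hand="right" />
    </>
  );
}
```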
For apps migrating from the old React Native architecture, the new JSI-based input pipeline is critical here. Controller and hand tracking data arrives at high frequency, and the old async bridge would have introduced 2 to 3 frames of input latency. With JSI, input data reaches your JavaScript code within 1ms of the tracking system producing it.
Spatial UI Design Patterns That Work
This is where most first-time VR developers stumble. You cannot simply take a 2D mobile UI and place it on a floating panel in 3D space. Spatial UI has its own rules, and violating them causes eye strain, nausea, and confusion.
Panel Placement and Distance
- Place primary UI panels at 1.2 to 2.0 meters from the user. Closer than 1 meter causes vergence-accommodation conflict: your eyes converge on the panel's virtual depth while the headset's optics force them to focus at a fixed distance. That mismatch causes headaches within minutes.
- Use curved panels for wide interfaces. A flat panel wider than 60 degrees of the user's field of view forces uncomfortable head rotation. Curving the panel at a 2-meter radius keeps all content at a comfortable reading distance.
- Anchor panels to the world, not to the user's head. Head-locked UI (like a HUD in a first-person shooter) causes nausea in VR. World-locked panels that the user can reposition are the standard pattern (a sketch follows this list).
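Here is a sketch of a world-locked panel placed in the comfortable range. `XRPanel` is the first-party primitive mentioned earlier; its props (position, width, and height in meters) are assumptions for illustration.

```tsx
import React from 'react';
import { Text, View } from 'react-native';
// Assumed props for the XRPanel primitive described earlier in this guide.
import { XRPanel } from '@react-native-xr/core';

export function SettingsPanel() {
  return (
    <XRPanel
      position={[0, 1.4, -1.5]} // world-locked: 1.5m ahead, roughly eye height
      width={0.8} // meters
      height={0.5}
    >
      {/* Assumption: ordinary React Native views render onto the panel. */}
      <View style={{ flex: 1, backgroundColor: '#1a1a2e', padding: 24 }}>
        <Text style={{ color: 'white', fontSize: 24 }}>Settings</Text>
      </View>
    </XRPanel>
  );
}
```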
Typography and Readability
- Quest 3's display resolution is 2064x2208 per eye, roughly 25 pixels per degree. That is significantly lower than a phone screen. Use a minimum font size equivalent to 24px in your spatial panels.
- High-contrast text on solid backgrounds is essential. Avoid text overlaid on complex 3D scenes or passthrough video without a backing panel.
- Sans-serif fonts with generous letter spacing render more clearly through Quest 3's pancake lenses. Inter and Roboto both work well.
Interaction Feedback
- Every interactive element needs hover, active, and selected states. In VR, there is no cursor on screen by default, so visual feedback on the target element itself is the only way users know what they are pointing at.
- Use spatial audio cues for button presses and state changes. A subtle click sound anchored to the button's 3D position reinforces the sense of physical interaction.
- Scale interactive elements up slightly (5 to 10%) on hover. This provides depth cue feedback that flat color changes alone cannot achieve (see the sketch after this list).
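Hover feedback maps naturally onto R3F's pointer events, which `@react-three/xr` can drive from controller rays and hand pointing. A sketch:

```tsx
import React, { useState } from 'react';

// A button mesh with hover (scale + color) and select states. In XR,
// onPointerOver/onPointerOut fire when a controller ray or pointing
// hand intersects the mesh.
export function HoverButton({ onSelect }: { onSelect: () => void }) {
  const [hovered, setHovered] = useState(false);
  return (
    <mesh
      scale={hovered ? 1.07 : 1} // ~7% scale-up as the depth cue
      onPointerOver={() => setHovered(true)}
      onPointerOut={() => setHovered(false)}
      onClick={onSelect}
    >
      <boxGeometry args={[0.2, 0.08, 0.02]} />
      <meshStandardMaterial color={hovered ? '#4f9cff' : '#2b6cb0'} />
    </mesh>
  );
}
```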
Meta's Spatial Design Guidelines document (updated February 2026) is the best reference for these patterns. We follow it on every Quest project and supplement it with our own testing on actual hardware. If you are designing for VR without wearing the headset daily, you are guessing. Stop guessing. Put the headset on.
Performance Optimization for Standalone VR
Quest 3 runs on a Snapdragon XR2 Gen 2. It is impressive for a standalone headset, but it is not a gaming PC. You have a strict thermal envelope, a fixed GPU budget, and a frame time ceiling that you cannot exceed without causing visible judder and user discomfort.
Frame Budget Math
- At 90Hz (the recommended refresh rate for most apps), each frame must complete in 11.1ms. That includes JavaScript execution, Three.js scene traversal, GPU rendering, and compositor overhead.
- In practice, aim for 8ms or less. The remaining 3ms is your safety margin for garbage collection pauses, background tasks, and thermal throttling that kicks in after 15 to 20 minutes of sustained load.
- At 120Hz (available on Quest 3 for lightweight apps), the budget drops to 8.3ms. Only target 120Hz if your scene is simple: fewer than 50K triangles, minimal transparency, no real-time shadows.
JavaScript Performance
- Avoid allocations in the render loop. Creating new objects, arrays, or closures every frame triggers GC pauses. Pre-allocate vectors and reuse them. Use `useRef` to persist mutable values across frames without triggering re-renders (see the sketch after this list).
- Move heavy computation to worklets. React Native Reanimated's worklet system runs code on the UI thread, bypassing the JS thread entirely. Use worklets for animation calculations, physics updates, and input smoothing.
- Profile with Hermes's sampling profiler. The `--profile` flag on Hermes generates flame charts that show exactly where your JavaScript time is going. We have found that JSON parsing and deep object spreads are the most common culprits in Quest apps.
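Here is the allocation-free pattern in an R3F `useFrame` loop:

```tsx
import React, { useRef } from 'react';
import { useFrame } from '@react-three/fiber';
import * as THREE from 'three';

// Allocate once at module scope and reuse every frame. Creating a new
// Vector3 inside useFrame would generate 90 garbage objects per second.
const tmp = new THREE.Vector3();

export function Orbiter() {
  const mesh = useRef<THREE.Mesh>(null);

  useFrame(({ clock }) => {
    if (!mesh.current) return;
    // No allocations here: write into the shared vector, then copy it.
    tmp.set(Math.sin(clock.elapsedTime), 1.4, Math.cos(clock.elapsedTime) - 2);
    mesh.current.position.copy(tmp);
  });

  return (
    <mesh ref={mesh}>
      <sphereGeometry args={[0.05, 16, 16]} />
      <meshStandardMaterial color="hotpink" />
    </mesh>
  );
}
```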
GPU Performance
- Use the Oculus Performance HUD (available in Quest developer settings) to monitor GPU utilization, frame timing, and thermal state in real time.
- Reduce overdraw by sorting transparent objects front-to-back and using alpha testing instead of alpha blending where possible.
- Compress textures to ASTC format (the native format for Quest's Adreno GPU). Uncompressed RGBA textures consume 4x the memory and bandwidth.
- Implement fixed foveated rendering via the Quest SDK. This reduces pixel shading workload by 20 to 30% by rendering the peripheral vision area at lower resolution.
Performance on Quest is not optional. On a phone, a dropped frame causes a brief stutter. In VR, dropped frames cause nausea, disorientation, and a terrible user experience. Treat your frame budget as a hard constraint, not a target.
React Native vs Unity vs Unreal for Quest Development
Every team considering React Native for Quest asks the same question: should we just use Unity instead? Here is an honest comparison based on projects we have delivered and evaluated.
React Native + R3F
- Best for: Spatial productivity apps, data dashboards, social and communication tools, training and onboarding experiences, content viewers, and mixed reality overlays.
- Strengths: Fast iteration with hot reload, massive talent pool, code sharing with mobile and web, familiar React component model, lightweight runtime.
- Weaknesses: Limited to ~200K triangles at 90fps, no built-in physics engine (you can add Cannon.js or Rapier, but they run on the JS thread), no visual scene editor comparable to Unity's.
- Team cost: 2 to 3 React Native engineers at $150K to $180K each.
Unity
- Best for: Games, complex simulations, photorealistic environments, apps with heavy physics and particle systems.
- Strengths: Mature VR ecosystem, visual editor, built-in physics (PhysX), asset store with thousands of VR-ready models, occlusion culling, LOD systems, baked GI.
- Weaknesses: C# developers are harder to find than JS developers, runtime licensing fees apply above revenue thresholds, iteration speed is slower (no hot reload in VR mode), and sharing code with web/mobile requires a separate codebase.
- Team cost: 2 to 3 Unity VR engineers at $180K to $250K each, plus a 3D artist at $100K to $150K.
Unreal Engine
- Best for: AAA-quality VR experiences, architectural visualization, cinematic VR, and projects where visual fidelity is the top priority.
- Strengths: Best-in-class rendering pipeline (Nanite, Lumen), Blueprint visual scripting lowers the code barrier, massive asset marketplace.
- Weaknesses: C++ development is slow and error-prone, Unreal's Quest support requires aggressive optimization to hit frame targets, very small developer pool, 5% royalty above $1M revenue.
- Team cost: 2 to 4 Unreal engineers at $200K to $280K each, plus environment artists.
For the types of VR apps most businesses actually need (not games), React Native is the pragmatic choice. You get faster development, cheaper teams, and code reuse across your existing mobile and web products. Reserve Unity or Unreal for projects where visual complexity genuinely requires a full game engine.
Realistic Project Timelines and What to Expect
Here is what a typical React Native Quest project looks like from kickoff to submission on the Meta Quest Store, based on our experience delivering spatial apps for clients across healthcare, enterprise training, and consumer products.
Phase 1: Discovery and Prototyping (2 to 3 weeks)
- Define the core VR interaction loop. What does the user do in the headset, and why is VR better than a flat screen for this task?
- Build a functional prototype with placeholder assets. Focus on input handling, panel layout, and basic scene rendering.
- Test on actual Quest hardware from day one. Simulator testing is nearly useless for VR because comfort, readability, and interaction feel can only be evaluated on the headset.
Phase 2: Core Development (6 to 8 weeks)
- Build out the full feature set: 3D scenes, data integrations, user authentication, hand tracking interactions, spatial UI panels.
- Implement performance monitoring from the start. Integrate frame timing checks and automated performance regression tests into your CI pipeline.
- Conduct weekly user testing sessions with 3 to 5 people wearing the headset. VR usability issues compound fast if you wait until the end to test.
Phase 3: Polish and Optimization (2 to 3 weeks)
- Optimize for thermal sustainability. Your app needs to run for 30+ minutes without the headset throttling performance.
- Add comfort features: vignette during locomotion, re-centering options, adjustable panel distances for different IPDs.
- Finalize visual assets, spatial audio, and haptic feedback.
Phase 4: Quest Store Submission (1 to 2 weeks)
- Meta's Quest Store review process takes 5 to 10 business days. They test for performance (must maintain 90fps), comfort (no forced locomotion without options), and content policy compliance.
- Prepare store listing assets: 2D screenshots, 360-degree preview videos, and a written description that explains why the app benefits from VR.
- If you are targeting enterprise distribution via Meta Quest for Business, the review process is faster (2 to 5 days) with fewer content restrictions.
Total timeline: 11 to 16 weeks from kickoff to live on the Quest Store. For comparison, a similar-scope Unity project typically takes 16 to 24 weeks, largely because the iteration speed is slower and the team ramp-up takes longer.
If you are considering building a VR experience alongside an existing mobile app, the code sharing between React Native mobile and React Native Quest can reduce the total effort by 25 to 35%. Your API clients, state management, authentication logic, and business rules transfer directly. That is a cost advantage that no game engine can match.
We have helped teams ship spatial computing apps across Meta Quest and Apple Vision Pro. Whether you are exploring VR for the first time or migrating an existing Unity project to React Native, we can help you move fast without the overhead of a game engine team. Book a free strategy call and let us scope your Quest project together.
Need help building this?
Our team has launched 50+ products for startups and ambitious brands. Let's talk about your project.