
AR/VR Workspace
Skill
Hand tracking, gesture detection, SLAM mapping, spatial audio, AR overlay generation.
About
name: nepa-arvr-workspace
description: AR/VR processing workspace for the NEPA AI platform. Handles hand tracking (MediaPipe), gesture detection, SLAM mapping, spatial audio positioning, and AR overlay generation. Use when building AR/VR features, tracking hands/gestures, or processing spatial video in NEPA AI.
AR/VR Workspace
813 lines, 25 methods. A complete AR/VR scene-building toolkit.
Prerequisites
```bash
pip install opencv-python mediapipe numpy torch transformers pillow pydub scipy open3d
```
Source
/home/billk/projects/nepa-ai-monorepo.BAK/vscode_forked/nepa-ai-backend/arvr_workspace.py
Setup
```bash
cd ~/projects/nepa-ai-monorepo.BAK/vscode_forked/nepa-ai-backend
python run_nepa.py
```
Key Methods
| Method | Description |
|--------|-------------|
| estimate_depth(image) | MiDaS/DPT depth estimation |
| track_hands(video) | MediaPipe hand landmark tracking |
| detect_gesture(landmarks) | Classify 25+ hand gestures |
| create_slam_map(video) | SLAM mapping from monocular video |
| detect_planes(video) | Horizontal/vertical plane detection |
| generate_ar_overlay(image, objects) | AR object overlay |
| position_spatial_audio(scene, source) | 3D spatial audio placement |
| build_xr_scene(components) | Assemble full XR scene |
| export_scene(scene, format) | Export to WebXR/Unity/Unreal |
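To give a sense of the data `detect_gesture` works with: MediaPipe hand tracking yields 21 (x, y, z) landmarks per hand, with the thumb tip at index 4 and the index fingertip at index 8. The sketch below is not the workspace's actual classifier, just a minimal illustration of recognizing one gesture (a pinch) from that landmark layout.

```python
import math

# MediaPipe hand-landmark indices (these two are standard):
THUMB_TIP, INDEX_TIP = 4, 8

def classify_pinch(landmarks, threshold=0.05):
    """Return 'pinch' if thumb and index fingertips are close, else 'open'.

    landmarks: list of 21 (x, y, z) tuples in normalized image coordinates.
    The threshold is an illustrative value, not a tuned one.
    """
    dist = math.dist(landmarks[THUMB_TIP], landmarks[INDEX_TIP])
    return "pinch" if dist < threshold else "open"

# Fingertips nearly touching -> pinch
pts = [(0.5, 0.5, 0.0)] * 21
pts[THUMB_TIP] = (0.40, 0.40, 0.0)
pts[INDEX_TIP] = (0.41, 0.41, 0.0)
print(classify_pinch(pts))  # pinch
```

A real 25-gesture classifier would compare many inter-landmark distances and angles (or feed the 63 coordinates to a small model), but the input shape is the same.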
Example Usage
```python
import asyncio

from arvr_workspace import ARVRWorkspace

async def main():
    ws = ARVRWorkspace()

    # Depth map for AR placement
    depth = await ws.estimate_depth("scene.jpg")

    # Hand tracking for gesture control
    result = await ws.track_hands("hand_video.mp4")
    gesture = await ws.detect_gesture(result["landmarks"])
    print(f"Detected gesture: {gesture['gesture']}")

    # SLAM map for spatial awareness
    slam = await ws.create_slam_map("walkthrough.mp4")

asyncio.run(main())
```
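One common next step after depth estimation is choosing where to anchor an AR object. The sketch below assumes the depth result is (or can be converted to) a 2-D numpy array of MiDaS-style relative depths where larger means closer; that return type is an assumption, not documented behavior.

```python
import numpy as np

def pick_anchor(depth: np.ndarray):
    """Pick the closest surface point in the lower half of the frame.

    A simple heuristic for a tabletop/floor anchor; assumes larger
    values mean closer (MiDaS-style inverse depth).
    """
    h, _ = depth.shape
    lower = depth[h // 2:, :]                      # search lower half of frame
    r, c = np.unravel_index(np.argmax(lower), lower.shape)
    return (int(r) + h // 2, int(c))               # (row, col) in full-image coords

depth = np.zeros((4, 4))
depth[3, 1] = 5.0                                  # closest surface point
print(pick_anchor(depth))                          # (3, 1)
```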
API Endpoints
POST /api/arvr/depth-estimate
POST /api/arvr/track-hands
POST /api/arvr/detect-gesture
POST /api/arvr/slam-map
POST /api/arvr/ar-overlay
POST /api/arvr/build-scene
POST /api/arvr/export
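The request and response shapes for these endpoints are not documented here, so the sketch below only shows how a call might be assembled against a locally running `run_nepa.py`; the base URL and the `{"image": ...}` payload are assumptions to adjust against the actual API.

```python
import json

BASE_URL = "http://localhost:8000"  # assumed address of a local run_nepa.py

def depth_request(image_path: str) -> dict:
    """Build URL and JSON payload for POST /api/arvr/depth-estimate (assumed shape)."""
    return {
        "url": f"{BASE_URL}/api/arvr/depth-estimate",
        "payload": {"image": image_path},
    }

req = depth_request("scene.jpg")
print(json.dumps(req["payload"]))  # {"image": "scene.jpg"}

# To actually send it (requires the `requests` package and a running server):
#   import requests
#   resp = requests.post(req["url"], json=req["payload"])
```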
AXON Store
Available at: https://axon.nepa-ai.com (arvr-workspace, $97)
Download: /downloads/arvr-workspace.zip
Full version: https://axon.nepa-ai.com/products
Core Capabilities
- Hand tracking (MediaPipe)
- Gesture detection
- SLAM mapping
- Spatial audio
- AR overlay generation
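As an illustration of what spatial audio positioning involves, here is a minimal constant-power stereo panning model with inverse-distance attenuation. The workspace's actual `position_spatial_audio` may well use a richer model (e.g. HRTF-based); this sketch only shows the underlying geometry.

```python
import math

def stereo_gains(source_xyz, listener_xyz=(0.0, 0.0, 0.0)):
    """Return (left_gain, right_gain) for a point source; +x is the listener's right.

    Constant-power panning: gains trace cos/sin over [0, pi/2], so
    left^2 + right^2 stays constant as the source pans.
    """
    dx = source_xyz[0] - listener_xyz[0]
    dist = math.dist(source_xyz, listener_xyz)
    pan = max(-1.0, min(1.0, dx / dist)) if dist > 0 else 0.0
    theta = (pan + 1.0) * math.pi / 4      # map [-1, 1] -> [0, pi/2]
    atten = 1.0 / max(dist, 1.0)           # inverse-distance falloff, no boost inside 1 m
    return math.cos(theta) * atten, math.sin(theta) * atten

left, right = stereo_gains((1.0, 0.0, 0.0))  # source 1 m to the listener's right
print(right > left)  # True
```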
Customer ratings
No reviews yet (0 ratings).
Version History
This skill is actively maintained.
April 7, 2026
Initial release
One-time purchase
$97
Creator
Axon Modal
Builder of AI-powered automation tools for creators, developers, and businesses. NEPA AI ships production-grade OpenClaw workspaces covering video, audio, image, design, code, 3D, animation, and more — each one a real agentic tool backed by C++ processing and local AI models. Based in Northeastern Pennsylvania. Building the future of creative automation one workspace at a time.
Details
- Type
- Skill
- Category
- Engineering
- Price
- $97
- Version
- 1
- License
- One-time purchase
Works With
Works with OpenClaw, Claude Projects, Custom GPTs, Cursor, and other instruction-friendly AI tools.