Networking Setup

Proper network configuration is essential for high-quality CloudXR streaming. This guide covers network requirements, configuration, and optimization for CloudXR.js applications.

CloudXR.js uses a two-tier connection architecture:

Web Server: Hosts your WebXR application and serves it to client devices.

  • HTTP Mode: npm run dev-server

    • Local: http://localhost:8080/
    • Network: http://<server-ip>:8080/
    • Use case: Local development, trusted networks
  • HTTPS Mode: npm run dev-server:https

    • Local: https://localhost:8080/
    • Network: https://<server-ip>:8080/
    • Use case: Local development and production deployments, remote access

Streaming Connection: Handles the XR streaming protocol between the client and the CloudXR Runtime.

  • Direct Connection: ws://<server-ip>:49100

    • Uses unsecured WebSocket protocol (ws://)
    • No proxy required
    • Suitable for HTTP clients and local networks
  • Proxied Connection: wss://<proxy-ip>:48322

    • Uses WebSocket Secure protocol (wss:// - WebSocket over TLS/SSL)
    • Requires WebSocket proxy with TLS support
    • Required for HTTPS clients (browsers enforce mixed content policy)

Important: When hosting your web application using HTTPS (npm run dev-server:https), you must configure a WebSocket proxy and connect using wss://. Browsers block non-secure WebSocket (ws://) connections from secure (HTTPS) pages due to mixed content security policies.
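The scheme-matching rule above can be sketched as a small helper. This is an illustrative Python sketch, not part of CloudXR.js; `signaling_url` is a hypothetical name, and ports 49100 and 48322 are the defaults documented in this guide.

```python
# Choose a signaling URL that satisfies the browser's mixed content policy.
# signaling_url() is a hypothetical helper; 49100 (direct ws://) and
# 48322 (wss:// proxy) are the default ports used in this guide.

def signaling_url(page_scheme, server_ip, proxy_host=None):
    """Return a WebSocket URL that an http:// or https:// page may open."""
    if page_scheme == "http":
        # Plain pages may open unsecured WebSocket connections directly.
        return f"ws://{server_ip}:49100"
    if page_scheme == "https":
        # Secure pages must connect through the TLS proxy via wss://.
        return f"wss://{proxy_host or server_ip}:48322"
    raise ValueError(f"unsupported page scheme: {page_scheme}")

print(signaling_url("http", "192.168.1.50"))   # ws://192.168.1.50:49100
print(signaling_url("https", "192.168.1.50"))  # wss://192.168.1.50:48322
```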

For detailed client configuration instructions, see the Client Setup Guide.

Metric                          Recommended    Min/Max
Downstream Available Bandwidth  >200 Mbps      >100 Mbps
Client-to-Server Ping           <20 ms         <40 ms
Downstream/Upstream Jitter      1 ms           4 ms
Downstream Packet Loss          0%             1%
WiFi Band                       5 GHz/6 GHz    5 GHz
WiFi Channel Width              80 MHz         40 MHz
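Measured values can be checked against the table with a simple grader. The thresholds below are copied from this guide; the function itself is an illustrative sketch, not part of CloudXR.js.

```python
# Grade measured network metrics against the targets documented above.
# Thresholds come from this guide's requirements table.

def grade_network(bandwidth_mbps, ping_ms, jitter_ms, loss_pct):
    """Return 'recommended', 'acceptable', or 'insufficient'."""
    recommended = (bandwidth_mbps > 200 and ping_ms < 20
                   and jitter_ms <= 1 and loss_pct == 0)
    acceptable = (bandwidth_mbps > 100 and ping_ms < 40
                  and jitter_ms <= 4 and loss_pct <= 1)
    if recommended:
        return "recommended"
    if acceptable:
        return "acceptable"
    return "insufficient"

print(grade_network(250, 12, 0.5, 0))   # recommended
print(grade_network(150, 30, 3, 0.5))   # acceptable
print(grade_network(80, 60, 8, 2))      # insufficient
```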

For optimal performance, use a dedicated network setup:

[CloudXR Server] ←→ [Router] ←→ [Client Device]
    (Ethernet)       (WiFi 6)     (Meta Quest 3)

Key considerations:

  • Use a wired connection for the server
  • Dedicate a WiFi channel to the client device
  • Minimize network hops between server and client
  • Avoid network congestion from other devices

Recommended WiFi settings:

  • Frequency Band: 5 GHz or 6 GHz (avoid 2.4 GHz)
  • Channel Width: 80 MHz or 160 MHz
  • Security: WPA3 (WPA2 acceptable)
  • Channel Selection: Use non-overlapping channels

Suggested router configuration steps:

  1. Enable 5 GHz/6 GHz bands with separate SSIDs
  2. Disable 2.4 GHz if possible to avoid interference
  3. Set fixed channels instead of auto-selection
  4. Enable QoS for traffic prioritization
  5. Disable band steering to prevent automatic switching

The CloudXR Runtime attempts to open the required ports in the workstation firewall when it starts, which requires the user to accept an elevated prompt. If that is not possible, you may need to configure the ports manually or disable the firewall entirely. The WiFi network must likewise allow traffic on these ports.

Here is the list of ports that must be open:

Service          Protocol  Server Port  Description
WebSocket        TCP       49100        CloudXR Runtime signaling port
Video Stream     UDP       47998-48012  CloudXR Runtime media ports
WebSocket Proxy  TCP       48322        Default wss:// proxy port (if using HTTPS)
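Reachability of the TCP ports can be verified from the client side with a quick probe. This is an illustrative Python sketch; note that the UDP media ports cannot be checked this way, because UDP has no connection handshake.

```python
import socket

# Probe a TCP port (e.g. the 49100 signaling port or the 48322 wss:// proxy).
# UDP media ports (47998-48012) cannot be verified like this: UDP is
# connectionless, so there is no handshake to confirm reachability.

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (hypothetical server address):
# print(port_open("192.168.1.50", 49100))
```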
On Windows (run as Administrator):

# Allow CloudXR Runtime ports
netsh advfirewall firewall add rule name="CloudXR Signaling" dir=in action=allow protocol=TCP localport=49100
netsh advfirewall firewall add rule name="CloudXR Media" dir=in action=allow protocol=UDP localport=47998-48012

# Allow wss:// proxy port (if using HTTPS)
netsh advfirewall firewall add rule name="CloudXR wss:// Proxy" dir=in action=allow protocol=TCP localport=48322

On Linux (ufw):

# Allow CloudXR Runtime ports
sudo ufw allow 49100/tcp
sudo ufw allow 47998:48012/udp

# Allow wss:// proxy port (if using HTTPS)
sudo ufw allow 48322/tcp
To measure actual network performance:

# Test bandwidth between server and client
iperf3 -s              # On the server
iperf3 -c <server-ip>  # On the client

# Test latency
ping <server-ip>

Use online tools like packetlosstest.com to test packet loss and jitter.
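Raw ping output can be related to the jitter target above by treating jitter as the mean absolute difference between consecutive round-trip times (one common definition). The sketch below is illustrative Python, not a CloudXR tool.

```python
# Summarize latency and jitter from round-trip times in milliseconds,
# e.g. parsed from `ping` output. Jitter here is the mean absolute
# difference between consecutive samples (one common definition).

def summarize_rtt(samples_ms):
    avg = sum(samples_ms) / len(samples_ms)
    diffs = [abs(b - a) for a, b in zip(samples_ms, samples_ms[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return avg, jitter

avg, jitter = summarize_rtt([12.1, 12.9, 11.8, 13.0, 12.2])
print(f"avg={avg:.1f} ms jitter={jitter:.2f} ms")
```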

High latency or lag:

  • Cause: Network congestion, poor WiFi signal, or server overload
  • Solutions:
    • Move closer to the router
    • Use 5 GHz instead of 2.4 GHz
    • Close other bandwidth-intensive applications
    • Check server performance

Stuttering or dropped frames:

  • Cause: WiFi interference, poor signal strength, or network congestion
  • Solutions:
    • Change the WiFi channel
    • Reduce the distance to the router
    • Check for interference sources
    • Use a wired connection for the server

Frequent disconnections:

  • Cause: Network instability, server issues, or timeouts
  • Solutions:
    • Check network stability
    • Verify the server is running
    • Increase timeout values
    • Implement reconnection logic

Security recommendations:

  • Use WPA3 encryption for WiFi
  • Enable network isolation if needed
  • Use a VPN for remote connections
  • Regularly update router firmware

When using HTTPS for your web application (for development or production), you need a WebSocket proxy with TLS support to establish secure connections. We provide example configurations for two common deployment scenarios.

Consider using a WebSocket proxy when:

  • Hosting your web application using HTTPS (npm run dev-server:https)
  • Deploying to production environments with proper SSL certificates
  • Accessing CloudXR from remote networks or the internet

The proxy acts as a secure gateway, providing TLS termination for WebSocket connections between CloudXR.js clients and the CloudXR Runtime.

We provide two example proxy configurations to help you get started:

Deployment Scenario             Example Solution                           Setup Complexity
Local development with HTTP     No proxy needed (direct ws:// connection)  None
Development/testing with HTTPS  Docker HAProxy example                     Low
Single-server production        Docker HAProxy example                     Low
Kubernetes production           nginx Ingress example                      Medium
Multi-server/enterprise         nginx Ingress example                      Medium-High

This example demonstrates a lightweight WebSocket proxy using HAProxy in a Docker container. It automatically generates self-signed SSL certificates and works well for development and single-server deployments. You can deploy this on either WSL2 in Windows OS or on Linux directly.

1. Create the Dockerfile (Dockerfile.wss.proxy):
FROM haproxy:3.2

# Switch to root user for package installation
USER root

# Install necessary tools
RUN apt-get update && apt-get install -y \
        bash \
        gettext-base \
        openssl \
    && rm -rf /var/lib/apt/lists/*

# Create directory for configuration
RUN mkdir -p /usr/local/etc/haproxy/certs \
    && chown -R haproxy:haproxy /usr/local/etc/haproxy

# Create simple certificate generation script
COPY <<EOF /usr/local/bin/generate-cert.sh
#!/bin/bash
cd /usr/local/etc/haproxy/certs
openssl req -x509 -newkey rsa:2048 -keyout server.key -out server.crt -days 365 -nodes -subj "/CN=localhost" -quiet
# Combine certificate and key into a single file for HAProxy
cat server.crt server.key > server.pem
chown haproxy:haproxy server.key server.crt server.pem
chmod 600 server.key server.pem
chmod 644 server.crt
EOF

RUN chmod +x /usr/local/bin/generate-cert.sh

# Create the HAProxy configuration template file
COPY --chown=haproxy:haproxy <<EOF /usr/local/etc/haproxy/haproxy.cfg.template
global
log stdout local0 info
stats timeout 30s
user haproxy

# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private

# See: https://ssl-config.mozilla.org/#server=haproxy&server-version=3.2&config=intermediate
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
log global
option httplog
option dontlognull
option logasap
timeout connect 5s
timeout client 3600s
timeout server 3600s
# WebSocket tunnel timeout (keep connection alive)
timeout tunnel 3600s

frontend websocket_frontend
log global
bind *:\${PROXY_PORT} \${PROXY_SSL_BIND_OPTIONS}
mode http

# Log connection details
capture request header Host len 32
capture request header Upgrade len 32
capture request header Connection len 32

# Add CORS headers for all responses
http-response set-header Access-Control-Allow-Origin "*"
http-response set-header Access-Control-Allow-Headers "*"
http-response set-header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS"
http-response set-header Access-Control-Expose-Headers "*"

# Handle OPTIONS requests for CORS preflight
http-request return status 200 content-type "text/plain" string "OK" if METH_OPTIONS

default_backend websocket_backend

backend websocket_backend
log global
mode http

# WebSocket support - HAProxy automatically handles Upgrade header
# No special configuration needed for WebSocket protocol upgrade

# Health check configuration:
# - inter: time between checks
# - rise: successful checks to mark as UP
# - fall: failed checks to mark as DOWN
# - on-marked-down shutdown-sessions: close existing sessions when backend goes down
server local_websocket \${BACKEND_HOST}:\${BACKEND_PORT} check inter \${HEALTH_CHECK_INTERVAL} rise \${HEALTH_CHECK_RISE} fall \${HEALTH_CHECK_FALL} on-marked-down shutdown-sessions
EOF

# Create the entrypoint script
COPY <<EOF /entrypoint.sh
#!/bin/bash

# Use default BACKEND_HOST if not set
if [ -z "\${BACKEND_HOST:+x}" ]; then
    export BACKEND_HOST=localhost
    echo "BACKEND_HOST not set, using default: \${BACKEND_HOST}"
fi

# Use default BACKEND_PORT if not set
if [ -z "\${BACKEND_PORT:+x}" ]; then
    export BACKEND_PORT=49100
    echo "BACKEND_PORT not set, using default: \${BACKEND_PORT}"
fi

# Use default PROXY_PORT if not set
if [ -z "\${PROXY_PORT:+x}" ]; then
    export PROXY_PORT=48322
    echo "PROXY_PORT not set, using default: \${PROXY_PORT}"
fi

# Use default health check interval if not set
if [ -z "\${HEALTH_CHECK_INTERVAL:+x}" ]; then
    export HEALTH_CHECK_INTERVAL=2s
    echo "HEALTH_CHECK_INTERVAL not set, using default: \${HEALTH_CHECK_INTERVAL}"
fi

# Use default health check rise if not set
if [ -z "\${HEALTH_CHECK_RISE:+x}" ]; then
    export HEALTH_CHECK_RISE=2
    echo "HEALTH_CHECK_RISE not set, using default: \${HEALTH_CHECK_RISE}"
fi

# Use default health check fall if not set
if [ -z "\${HEALTH_CHECK_FALL:+x}" ]; then
    export HEALTH_CHECK_FALL=3
    echo "HEALTH_CHECK_FALL not set, using default: \${HEALTH_CHECK_FALL}"
fi

echo "Launching WebSocket SSL Proxy:"
echo "  Backend Host: \${BACKEND_HOST}"
echo "  Backend Port: \${BACKEND_PORT}"
echo "  Proxy Port: \${PROXY_PORT}"
echo "  Health Check Interval: \${HEALTH_CHECK_INTERVAL}"
echo "  Health Check Rise: \${HEALTH_CHECK_RISE}"
echo "  Health Check Fall: \${HEALTH_CHECK_FALL}"

# Generate a self-signed SSL certificate
/usr/local/bin/generate-cert.sh
export PROXY_SSL_BIND_OPTIONS="ssl crt /usr/local/etc/haproxy/certs/server.pem"
echo "SSL enabled - self-signed certificate generated"

# Process the template and create the final config
envsubst < /usr/local/etc/haproxy/haproxy.cfg.template > /usr/local/etc/haproxy/haproxy.cfg

# Function to handle signals and forward them to HAProxy
handle_signal() {
    echo "Received signal, shutting down HAProxy..."
    if [ -n "\$HAPROXY_PID" ]; then
        kill -TERM "\$HAPROXY_PID" 2>/dev/null
        wait "\$HAPROXY_PID"
    fi
    exit 0
}

# Set up signal handlers
trap handle_signal SIGTERM SIGINT

# Start HAProxy in the background and capture its PID
echo "Starting HAProxy..."
haproxy -f /usr/local/etc/haproxy/haproxy.cfg &
HAPROXY_PID=\$!

# Wait for the HAProxy process
wait "\$HAPROXY_PID"
EOF

RUN chmod +x /entrypoint.sh

# Switch back to haproxy user
USER haproxy

# Set the entrypoint
ENTRYPOINT ["/entrypoint.sh"]
  2. Build the Docker image:
docker build -t websocket-ssl-proxy -f Dockerfile.wss.proxy .
  3. Run the proxy container:
docker run -d --name wss-proxy \
    --network host \
    -e BACKEND_HOST=localhost \
    -e BACKEND_PORT=49100 \
    -e PROXY_PORT=48322 \
    websocket-ssl-proxy
  4. Verify the proxy is running:
# Check container status
docker ps | grep wss-proxy

# View logs
docker logs wss-proxy

You can customize the proxy behavior using these environment variables:

Variable               Default    Description
BACKEND_HOST           localhost  CloudXR Runtime hostname or IP address
BACKEND_PORT           49100      CloudXR Runtime WebSocket port
PROXY_PORT             48322      SSL proxy listening port
HEALTH_CHECK_INTERVAL  2s         Time between backend health checks
HEALTH_CHECK_RISE      2          Consecutive successful checks to mark backend UP
HEALTH_CHECK_FALL      3          Consecutive failed checks to mark backend DOWN
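The defaulting behavior of the entrypoint script can be expressed compactly. This illustrative Python sketch mirrors the table above; `resolve_proxy_config` is a hypothetical name, not part of the container image.

```python
# Overlay user-provided environment variables on top of the documented
# defaults, mirroring the if/export blocks in the container entrypoint.

DEFAULTS = {
    "BACKEND_HOST": "localhost",
    "BACKEND_PORT": "49100",
    "PROXY_PORT": "48322",
    "HEALTH_CHECK_INTERVAL": "2s",
    "HEALTH_CHECK_RISE": "2",
    "HEALTH_CHECK_FALL": "3",
}

def resolve_proxy_config(env):
    """Return the effective proxy settings for a given environment mapping."""
    return {key: env.get(key, default) for key, default in DEFAULTS.items()}

print(resolve_proxy_config({"BACKEND_HOST": "10.0.0.5"}))
```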

If you have your own SSL certificate, you can use it instead of the auto-generated self-signed certificate:

  1. Prepare certificate:

    • Combine your certificate and private key into a single PEM file:
    cat your-cert.crt your-key.key > server.pem
    
  2. Mount certificate into container:

    docker run -d --name wss-proxy \
    --network host \
    -v /path/to/server.pem:/usr/local/etc/haproxy/certs/server.pem:ro \
    -e BACKEND_HOST=localhost \
    -e BACKEND_PORT=49100 \
    -e PROXY_PORT=48322 \
    websocket-ssl-proxy

The proxy continuously monitors the CloudXR Runtime backend:

  • Backend DOWN: Logs show Server websocket_backend/local_websocket is DOWN

    • Expected during CloudXR Runtime startup
    • Proxy automatically reconnects when runtime becomes available
  • Backend UP: Logs show Server websocket_backend/local_websocket is UP

    • Proxy is ready to accept client connections
    • WebSocket connections are forwarded to the runtime

Stop the proxy:

docker stop wss-proxy

Start a stopped proxy:

docker start wss-proxy

Delete the proxy container:

docker stop wss-proxy
docker rm wss-proxy

Important: Each time the container is created or restarted, a new self-signed certificate is generated unless you mount your own certificate (see Using Custom Certificates above). With auto-generated certificates, you will need to re-trust the certificate in your browser by visiting https://<server-ip>:48322/ and accepting the certificate warning. See the Client Setup Guide - Trust SSL Certificates for detailed instructions.

After starting the proxy, you can configure your CloudXR.js client to connect using:

wss://<server-ip>:48322

For client configuration instructions, see the Client Setup Guide - Trust SSL Certificates.

"Connection Refused" errors during startup:

  • This is expected behavior during CloudXR Runtime initialization
  • The proxy will automatically connect when the runtime becomes ready
  • Monitor the logs to see when connection is established: docker logs -f wss-proxy

Certificate trust issues:

  • Ensure your client browser has trusted the self-signed certificate
  • Follow the Client Setup Guide
  • For production deployments, consider using properly signed certificates

Firewall blocking connections:

  • Verify that port 48322 (or your configured PROXY_PORT) is open:
    # Ubuntu/Debian example
    sudo ufw allow 48322/tcp

This example demonstrates an enterprise-grade solution using nginx Ingress Controller on Kubernetes. This configuration supports multiple CloudXR servers, load balancing, and integration with existing Kubernetes infrastructure.

This example assumes you have:

  • Kubernetes cluster with kubectl access
  • nginx Ingress Controller installed
  • Valid TLS certificate and key (tls.crt and tls.key)
  • Familiarity with Kubernetes resource management
Create the TLS secret from your certificate and key:

kubectl create secret tls my-tls --cert=tls.crt --key=tls.key

The nginx proxy configuration example below handles WebSocket connections by routing /{IP}:{PORT}/{path} requests to the target CloudXR server. Note that in this example the optional :{PORT} segment is matched but ignored; the backend port is fixed at 49100.

ConfigMap:

apiVersion: v1
kind: ConfigMap
...
data:
  nginx.conf: |
    ...
    http {
      ...
      server {
        ...
        location = /test {
          return 200 'WebSocket proxy ready\n';
        }
        location ~ ^/([0-9.]+)(?::[0-9]+)?(.*)$ {
          set $target_ip $1;
          set $target_port 49100;
          set $request_path $2;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "upgrade";
          proxy_http_version 1.1;
          proxy_pass http://$target_ip:$target_port$request_path;
        }
      }
    }
    ...
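The routing regex in the location block above can be exercised outside nginx to check how request paths decompose. This is an illustrative Python sketch of the same pattern, not part of the deployment.

```python
import re

# Mirror of the nginx location regex above: the IP is captured, an optional
# ":PORT" segment is matched but discarded, and the remainder becomes the
# backend request path. The backend port is fixed, as in the nginx example.
ROUTE = re.compile(r"^/([0-9.]+)(?::[0-9]+)?(.*)$")

def route(path, backend_port=49100):
    """Return the proxied backend URL for a request path, or None."""
    m = ROUTE.match(path)
    if not m:
        return None
    target_ip, request_path = m.groups()
    return f"http://{target_ip}:{backend_port}{request_path}"

print(route("/192.168.1.50:49100/stream"))  # http://192.168.1.50:49100/stream
print(route("/10.0.0.7/session"))           # http://10.0.0.7:49100/session
```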

The Ingress resource exposes the proxy service externally with:

  • TLS Termination: Handles HTTPS encryption/decryption
  • WebSocket Support: Special annotations for WebSocket protocol handling
  • Path Routing: Routes /* requests to the nginx proxy service
  • SSL Redirect: Automatically redirects HTTP to HTTPS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/websocket-services: "..."
  ...
spec:
  rules:
  - host: <https-proxy>
    http:
      paths:
      - backend:
          service:
            name: <nginx-service>
            port:
              number: <nginx-port>
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - <https-proxy>
    secretName: my-tls

Once deployed, you can test the proxy endpoint with:

curl -k https://<https-proxy>/test

Refer to the Getting Started Guide and check out the examples we provide to see how the proxy is used. You can run the example web server over HTTPS and then fill in the proxy URL.

The secure WebSocket connection URL shown in the console log will take the form:

wss://{https-proxy}/{cloudxr-server-ip}:{port}/{optional-path}
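A small helper can assemble this proxied URL format. This is an illustrative Python sketch; `proxied_url` and the hostnames are placeholders, not CloudXR.js API.

```python
# Build the wss:// URL a CloudXR.js client would use when connecting
# through the nginx Ingress proxy described above:
#   wss://{https-proxy}/{cloudxr-server-ip}:{port}/{optional-path}

def proxied_url(https_proxy, server_ip, port=49100, path=""):
    suffix = path if path.startswith("/") or not path else "/" + path
    return f"wss://{https_proxy}/{server_ip}:{port}{suffix}"

print(proxied_url("proxy.example.com", "192.168.1.50"))
# wss://proxy.example.com/192.168.1.50:49100
print(proxied_url("proxy.example.com", "192.168.1.50", 49100, "session"))
# wss://proxy.example.com/192.168.1.50:49100/session
```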