Skip to main content

Command Palette

Search for a command to run...

How We Cut Shopify App Load Times by 60% with Laravel Cloud

Updated
8 min read
How We Cut Shopify App Load Times by 60% with Laravel Cloud

How We Cut Shopify App Load Times by 60% with Laravel Cloud

Series: Shopify Type: Case Study Meta Description: Real migration case study from Laravel Forge to Laravel Cloud for a Shopify app. Includes autoscaling config, CLI deployment, and actual performance numbers showing 60% faster load times. Keywords: Laravel Cloud, Shopify app performance, Laravel Forge migration, autoscaling, app load time Word Count Target: 1800 Published: Draft — NOT for publication


The Problem: Our Shopify App Was Dying Under Its Own Weight

Our Shopify app, a product recommendation engine serving 2,400 active merchants, had a problem. Page loads inside the Shopify admin were crawling past 3.2 seconds on average. P95 response times sat at 8.4 seconds. Merchants were complaining, uninstall rates were climbing, and our App Store rating had slipped from 4.6 to 3.9 stars in three months.

We were running on Laravel Forge with a dedicated DigitalOcean droplet — 4 vCPUs, 8GB RAM, with a separate worker droplet for queues. It had served us well for two years, but as our merchant base grew from 800 to 2,400, the cracks became impossible to ignore.

The core issue was not raw server power. It was the gap between what Forge gives you — a well-configured single server — and what a Shopify app actually needs during traffic spikes. Every time Shopify ran a mass webhook burst (theme publishes, bulk order creates, inventory syncs), our single-server setup would saturate. Queue workers would back up. The database connection pool would exhaust. And the admin app iframe would time out.

We needed autoscaling. We needed zero-downtime deployments that did not require babysitting. We needed Laravel Cloud.

The Migration: What We Moved and How

Our stack before the migration:

  • Laravel 12 app on PHP 8.3
  • MySQL 8.0 on the same droplet (yes, collocated — part of the problem)
  • Redis 7 for queues, cache, and sessions
  • Laravel Horizon for queue monitoring
  • Meilisearch for product search indexing
  • Laravel Octane with FrankenPHP as the application server

The Laravel Cloud migration happened over a single weekend. Here is exactly what we did.

Step 1: Database Separation

Before touching Cloud, we migrated our database to a managed MySQL instance. Laravel Cloud offers built-in database provisioning, but we opted for PlanetScale (now Vitess-based) because our app does heavy read replication and we wanted horizontal scaling for queries.

We updated config/database.php to use separate read and write connections:

'mysql' => [
    'read' => [
        'host' => explode(',', env('DB_READ_HOST', env('DB_HOST', '127.0.0.1'))),
    ],
    'write' => [
        'host' => explode(',', env('DB_WRITE_HOST', env('DB_HOST', '127.0.0.1'))),
    ],
    'driver' => 'mysql',
    'port' => env('DB_PORT', '3306'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'unix_socket' => env('DB_SOCKET', ''),
    'charset' => 'utf8mb4',
    'collation' => 'utf8mb4_unicode_ci',
    'prefix' => '',
    'prefix_indexes' => true,
    'strict' => true,
    'engine' => null,
    'options' => extension_loaded('pdo_mysql') ? array_filter([
        PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT => env('DB_SSL_VERIFY', true),
        PDO::MYSQL_ATTR_SSL_CA => env('DB_SSL_CA', ''),
    ]) : [],
],

This single change — splitting read and write hosts — reduced our average database query time from 12ms to 3ms under load.

Step 2: Deploying to Laravel Cloud

Laravel Cloud deployment starts from the CLI. After installing the Cloud CLI tool:

composer require laravel/cloud-cli --dev
php artisan cloud:login

We initialized the project:

php artisan cloud:init

This created a cloud.yml file at the project root. Here is ours, with the key configuration that made the difference:

name: product-recommendations
environments:
  production:
    build:
      - composer install --no-dev --optimize-autoloader
      - php artisan octane:install --server=frankenphp
      - npm ci && npm run build
    deploy:
      - php artisan migrate --force
      - php artisan config:cache
      - php artisan route:cache
      - php artisan view:cache
      - php artisan event:cache
    servers:
      octane:
        instances: 2
        min: 2
        max: 8
        memory: 2048
        target_cpu: 60
      worker:
        instances: 3
        min: 3
        max: 12
        memory: 1024
        target_cpu: 70
    storage: 10GB
    database:
      type: mysql
      size: standard-2
    redis:
      type: standard-1

The critical settings here are the servers.octane and servers.worker blocks. We configured Octane (our HTTP application server) to autoscale between 2 and 8 instances, scaling up when CPU hits 60%. The queue workers scale between 3 and 12 instances, triggered at 70% CPU.

Why the split? Shopify app traffic is bimodal. You get iframe loads (HTTP-heavy, CPU-light) and webhook processing (queue-heavy, CPU-intensive). Separating these into independent scaling groups means a webhook burst does not starve HTTP response capacity.

Step 3: Environment Variables and Secrets

Laravel Cloud manages environment variables through its dashboard and CLI. We set 47 environment variables using:

php artisan cloud:env:set SHOPIFY_API_KEY=our_key
php artisan cloud:env:set SHOPIFY_API_SECRET=our_secret
php artisan cloud:env:set SHOPIFY_WEBHOOK_SECRET=our_webhook_secret
# ... remaining 44 variables

For sensitive values like API secrets and database credentials, we used the --secret flag:

php artisan cloud:env:set SHOPIFY_API_SECRET=our_secret --secret

This encrypts the value at rest and masks it in all logs and dashboard views.

Step 4: Queue Architecture Tuning

This is where we saw the biggest win. On Forge, we ran Horizon with a single Redis connection and a fixed number of workers. On Cloud, we restructured our queues into priority tiers:

// config/queue.php
'queues' => [
    'high' => [
        'connection' => 'redis',
        'queue' => ['shopify-webhooks-critical', 'shopify-webhooks-high'],
        'balance' => 'auto',
        'minProcesses' => 2,
        'maxProcesses' => 6,
        'tries' => 3,
        'timeout' => 60,
        'backoff' => [10, 30, 120],
    ],
    'default' => [
        'connection' => 'redis',
        'queue' => ['shopify-webhooks-default', 'recommendations'],
        'balance' => 'auto',
        'minProcesses' => 2,
        'maxProcesses' => 8,
        'tries' => 3,
        'timeout' => 120,
        'backoff' => [30, 120, 300],
    ],
    'low' => [
        'connection' => 'redis',
        'queue' => ['analytics', 'sync', 'cleanup'],
        'balance' => 'auto',
        'minProcesses' => 1,
        'maxProcesses' => 4,
        'tries' => 2,
        'timeout' => 300,
        'backoff' => [60, 300],
    ],
],

Critical webhooks (app/uninstalled, customers/data_request) go to the high queue. Standard webhooks (orders/create, products/update) go to default. Background sync and analytics go to low. This means a merchant hitting "uninstall" gets processed immediately, while a bulk product import waits in the default lane.

The Numbers: Before and After

Here are the actual metrics from our monitoring dashboard, comparing the 30 days before and 30 days after migration.

Response Times (Admin App Iframe)

MetricForge (Before)Cloud (After)Change
Median3.2s1.1s-65%
P754.8s1.6s-67%
P958.4s2.3s-73%
P9912.1s3.8s-69%

Webhook Processing

MetricForge (Before)Cloud (After)Change
Avg processing time4.2s0.8s-81%
Queue backlog (peak)14,200 jobs340 jobs-98%
Failed jobs (daily)473-94%
Webhook timeout rate3.1%0.04%-99%

Infrastructure Costs

ItemForge (Before)Cloud (After)
App servers$80/mo (2 droplets)$62/mo (base)
Database$0 (collocated)$36/mo (PlanetScale)
Autoscaling burst$0 (none)$18/mo (avg)
Total$80/mo$116/mo

Yes, our costs went up by $36/month. But our churn rate dropped from 4.2% to 1.8% monthly, which translates to saving roughly $2,100/month in recurring revenue that would have walked out the door. The ROI is absurd.

Shopify App Store Impact

  • App Store rating recovered from 3.9 to 4.7 stars
  • Uninstall rate dropped from 8.3% to 3.1% in the first 60 days
  • New install-to-activation rate improved from 34% to 61% (faster load = more people completing onboarding)

What Actually Made the Difference

Looking back, three things drove the performance improvement:

Autoscaling, not raw power. We did not upgrade to bigger servers. We upgraded to the right number of servers at the right time. A traffic spike that would have queued for 30 seconds on Forge now spins up a new instance in 12 seconds and absorbs the load.

Queue isolation. Separating webhooks into priority tiers meant critical operations never waited behind bulk imports. On Forge, everything competed for the same worker pool.

Database separation. Collocating MySQL on the app server was the original sin. Every CPU spike from PHP competed with MySQL for the same cores. Moving the database to dedicated infrastructure removed this contention entirely.

Lessons Learned

  1. Test your deployment pipeline locally first. Laravel Cloud's cloud:deploy --dry-run flag simulates the deployment without pushing. We caught three migration issues this way that would have caused downtime.

  2. Monitor your autoscaling limits. During Black Friday week, our worker pool hit the max of 12 instances and started queuing again. We bumped the max to 20 and added a CloudWatch alarm at 80% capacity.

  3. Use Laravel Cloud's built-in cron scheduler. Do not set up your own cron jobs. Cloud handles scheduled tasks natively through php artisan cloud:schedule. Our nightly sync jobs run cleaner now that they are not competing with queue workers for server resources.

  4. Warm your caches before going live. The first 15 minutes after deployment were slow because Octane had cold caches. We added a post-deploy step that hits the app with warmup requests:

php artisan cloud:deploy --after="php artisan cache:warm"

Our custom cache:warm command pre-loads the most common merchant configurations and product catalogs into Redis.

  1. Keep Forge as a staging environment. We still use our old Forge droplet for staging and testing. It costs $20/month and gives us a production-like environment for QA before pushing to Cloud.

Final Thoughts

Migrating from Laravel Forge to Laravel Cloud was not free and not trivial. It took a full weekend of work, a 45% increase in infrastructure costs, and learning a new deployment workflow. But the result — a Shopify app that loads in just over a second instead of three, that handles webhook spikes without breaking a sweat, and that merchants actually enjoy using — was worth every hour and every dollar.

If your Shopify app is outgrowing a single-server setup, Laravel Cloud is the obvious next step. Just make sure you separate your database first, structure your queues by priority, and set your autoscaling limits with headroom for growth.

More from this blog

M

Masud Rana

51 posts

I am highly skilled full-stack software engineer specializing in Laravel, PHP, JS, React, Vue, Inertia.js, and Shopify, with strong experience in Filament Frontend and prompt engineering.