SuiteScript Performance Optimization: Writing Efficient Scripts
Tags: NetSuite, SuiteScript, Performance Optimization, Governance, Map/Reduce, NetSuite Development, API

February 2, 2026 · 10 min read

A single inefficient saved search loop can consume 10,000 governance units in seconds. We've seen it crash scheduled scripts, time out user event scripts, and bring Suitelet interfaces to a crawl.

NetSuite's governance system isn't arbitrary—it's protecting your account from runaway scripts. But understanding governance is just the beginning. True SuiteScript performance optimization requires rethinking how you query data, process records, and manage memory.

We've optimized hundreds of SuiteScript implementations. The patterns that follow come from real performance audits where we cut execution times by 60-90% and governance consumption by 80%+. These aren't theoretical best practices—they're battle-tested techniques with measurable results.

Understanding Governance: The Foundation of SuiteScript Performance

Every SuiteScript operation costs governance units. Exceed your script's allocation, and NetSuite terminates execution. Understanding the governance model is prerequisite to writing efficient code.

Script Type Governance Limits

| Script Type | Governance Limit | Typical Use Case |
| --- | --- | --- |
| Client Script | 1,000 units | Field validation, UI interactions |
| User Event | 1,000 units | Before/after record save logic |
| Suitelet | 1,000 units | Custom UI and web services |
| RESTlet | 5,000 units | External API endpoints |
| Scheduled Script | 10,000 units | Batch processing (single execution) |
| Map/Reduce | 10,000 units for getInputData and summarize; 1,000 per map invocation, 5,000 per reduce invocation | High-volume parallel processing |
| Workflow Action | 1,000 units | Workflow-triggered logic |

Operation Governance Costs

Not all operations cost the same. Here's what consumes your budget:

| Operation | Governance Cost | Notes |
| --- | --- | --- |
| record.load() | 5-10 units | Varies by record type |
| record.create() + save() | 10-20 units | Depends on sublist count |
| record.submitFields() | 2-4 units | Much cheaper than a full load/save |
| search.create() + run() | 5 units | Initial search execution |
| Search result iteration | 10 units per page (1,000 results) | Paged data is more efficient |
| https.request() | 10 units | External API calls |
| email.send() | 10 units | Each email sent |
| file.load() | 5 units | File cabinet access |

The killer insight: record operations are expensive; searches are cheap. Most performance problems come from developers loading records when a search would suffice.
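To see why this matters before a script ever runs, it helps to do the arithmetic up front. The sketch below is a back-of-envelope budget estimator in plain JavaScript; the unit costs are the approximate figures from the table above, not official values:

```javascript
// Rough governance budget estimator (a sketch; unit costs are the
// approximate figures from the table above, not official values).
const UNIT_COSTS = {
    recordLoad: 10,    // record.load(), upper bound
    recordSave: 20,    // record.create() + save(), upper bound
    submitFields: 4,   // record.submitFields()
    lookupFields: 1,   // search.lookupFields()
    searchPage: 10     // one page of search results (1,000 rows)
};

// Estimate total units for processing `count` records with a given
// per-record operation, plus the search pages needed to find them.
function estimateUnits(count, perRecordOp) {
    const pages = Math.ceil(count / 1000);
    return pages * UNIT_COSTS.searchPage + count * UNIT_COSTS[perRecordOp];
}

// Loading 1,000 full records blows through a 10,000-unit budget...
console.log(estimateUnits(1000, 'recordLoad'));   // 10010
// ...while lookups on the same 1,000 records barely dent it.
console.log(estimateUnits(1000, 'lookupFields')); // 1010
```

Running the numbers like this before choosing an API call is often the difference between a script that finishes and one that terminates mid-batch.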

Checking Governance in Real-Time

Always monitor your governance consumption:

/**
 * @NApiVersion 2.1
 * @NScriptType ScheduledScript
 */
define(['N/runtime', 'N/log'], function(runtime, log) {
    
    function execute(context) {
        const script = runtime.getCurrentScript();
        
        // Check at start
        log.debug('Starting Governance', script.getRemainingUsage());
        
        // Your processing logic here
        processRecords();
        
        // Check after major operations
        log.debug('After Processing', script.getRemainingUsage());
        
        // Always leave buffer
        if (script.getRemainingUsage() < 500) {
            log.audit('Governance Warning', 'Low governance - stopping early');
            return;
        }
    }
    
    return { execute: execute };
});

Build governance checks into loops. Exit gracefully before you hit the limit—don't let NetSuite terminate your script mid-operation.
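Outside NetSuite, that loop guard reduces to a simple budget check. This plain-JavaScript sketch stubs out runtime.getCurrentScript() so the pattern can run anywhere; the stub and unit costs are illustrative, not real API behavior:

```javascript
// Stub standing in for runtime.getCurrentScript(); each unit of
// simulated work "spends" governance units. Illustrative only.
function makeScriptStub(startingUnits) {
    let remaining = startingUnits;
    return {
        getRemainingUsage: function () { return remaining; },
        spend: function (units) { remaining -= units; }
    };
}

const GOVERNANCE_BUFFER = 500; // stop well before the hard limit

// Process items until done or until governance runs low; return how
// many were handled so a scheduled script could resume from there.
function processWithGuard(script, items, unitsPerItem) {
    let processed = 0;
    for (let i = 0; i < items.length; i++) {
        if (script.getRemainingUsage() < GOVERNANCE_BUFFER + unitsPerItem) {
            break; // exit gracefully instead of dying mid-record
        }
        script.spend(unitsPerItem); // stands in for real record/search work
        processed++;
    }
    return processed;
}

const script = makeScriptStub(10000);
const items = new Array(2000).fill(0);
console.log(processWithGuard(script, items, 10)); // 950
```

Returning the processed count is what makes graceful exit useful: a scheduled script can reschedule itself and pick up where it stopped.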


Efficient Search Patterns: The Biggest Performance Win

Searches are the backbone of SuiteScript. Optimizing them delivers the largest performance gains.


Pattern 1: Use Saved Searches Instead of Scripted Searches

Saved searches are compiled and cached by NetSuite. Scripted searches are built at runtime. For searches you'll run repeatedly, saved searches win.

Slow approach (scripted search every time):

// BAD: Building search on every execution
function getOpenOrders() {
    const orderSearch = search.create({
        type: search.Type.SALES_ORDER,
        filters: [
            ['status', 'anyof', 'SalesOrd:B', 'SalesOrd:D', 'SalesOrd:E'],
            'AND',
            ['mainline', 'is', 'T']
        ],
        columns: ['entity', 'tranid', 'total', 'trandate']
    });
    
    return orderSearch.run().getRange({ start: 0, end: 1000 });
}

Fast approach (saved search):

// GOOD: Load pre-compiled saved search
function getOpenOrders() {
    const orderSearch = search.load({ id: 'customsearch_open_orders' });
    return orderSearch.run().getRange({ start: 0, end: 1000 });
}

Performance difference: 15-30% faster execution, lower governance consumption, and the saved search can be modified by administrators without code changes.

Pattern 2: Paged Data for Large Result Sets

Using getRange() loads results into memory. For large result sets, use paged data instead.

Memory-hungry approach:

// BAD: Accumulates every result into a single in-memory array
function processAllCustomers() {
    const allResults = [];
    search.create({
        type: search.Type.CUSTOMER,
        filters: [['isinactive', 'is', 'F']],
        columns: ['companyname', 'email', 'salesrep']
    }).run().each(function(result) {
        allResults.push(result); // each() stops at 4,000 results anyway
        return true;
    });
    
    allResults.forEach(function(result) {
        processCustomer(result);
    });
}

Memory-efficient approach:

// GOOD: Processes in pages, never loads entire result set
function processAllCustomers() {
    const customerSearch = search.create({
        type: search.Type.CUSTOMER,
        filters: [['isinactive', 'is', 'F']],
        columns: ['companyname', 'email', 'salesrep']
    });
    
    const pagedData = customerSearch.runPaged({ pageSize: 1000 });
    
    pagedData.pageRanges.forEach(function(pageRange) {
        const page = pagedData.fetch({ index: pageRange.index });
        
        page.data.forEach(function(result) {
            processCustomer(result);
        });
    });
}

Why it matters: Paged data streams results. You never hold more than one page (1000 results) in memory. This prevents memory exceptions on large data sets and processes results faster.
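The pageRanges/fetch shape generalizes to any chunked iteration. Here is a plain-JavaScript sketch of the idea (illustrative only; in NetSuite, runPaged() does this server-side over live search results):

```javascript
// Plain-JavaScript sketch of the runPaged() idea: expose page ranges
// up front, then fetch one page at a time so only pageSize items are
// ever in flight at once. Illustrative, not the NetSuite internals.
function runPagedSketch(allIds, pageSize) {
    const pageRanges = [];
    for (let i = 0; i < allIds.length; i += pageSize) {
        pageRanges.push({ index: pageRanges.length });
    }
    return {
        pageRanges: pageRanges,
        fetch: function (opts) {
            const start = opts.index * pageSize;
            return { data: allIds.slice(start, start + pageSize) };
        }
    };
}

const ids = Array.from({ length: 2500 }, (_, i) => i + 1);
const paged = runPagedSketch(ids, 1000);
let seen = 0;
paged.pageRanges.forEach(function (range) {
    seen += paged.fetch({ index: range.index }).data.length;
});
console.log(paged.pageRanges.length, seen); // 3 2500
```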

Pattern 3: Return Only the Columns You Need

Every column in your search results consumes memory and bandwidth. Request only what you'll use.

Wasteful approach:

// BAD: Returns all available columns
const itemSearch = search.create({
    type: search.Type.INVENTORY_ITEM,
    filters: [['isinactive', 'is', 'F']],
    columns: [
        'itemid', 'displayname', 'salesdescription', 'purchasedescription',
        'baseprice', 'cost', 'quantityavailable', 'quantityonhand',
        'quantityonorder', 'quantitybackordered', 'vendor', 'manufacturer',
        'class', 'department', 'location', 'subsidiary', 'custitem_field1',
        'custitem_field2', 'custitem_field3', 'custitem_field4'
        // 20 columns when you only need 3
    ]
});

Efficient approach:

// GOOD: Only the columns you'll actually use
const itemSearch = search.create({
    type: search.Type.INVENTORY_ITEM,
    filters: [['isinactive', 'is', 'F']],
    columns: ['itemid', 'displayname', 'quantityavailable']
    // 3 columns - faster query, less memory
});

Benchmark: A search returning 5,000 items with 20 columns vs. 3 columns executes 40-50% faster.

Pattern 4: Use Formula Fields for Calculated Values

Don't fetch raw data and calculate in JavaScript. Let the database do the math.

Slow approach (client-side calculation):

// BAD: Fetching raw dates and calculating in JS
const results = search.create({
    type: search.Type.SALES_ORDER,
    columns: ['trandate', 'shipdate']
}).run().getRange({ start: 0, end: 1000 });

const overdueOrders = results.filter(function(result) {
    const tranDate = new Date(result.getValue('trandate'));
    const shipDate = result.getValue('shipdate');
    if (!shipDate) return false;
    
    const shipDateObj = new Date(shipDate);
    const daysDiff = (shipDateObj - tranDate) / (1000 * 60 * 60 * 24);
    return daysDiff > 30;
});

Fast approach (server-side calculation):

// GOOD: Let the database filter
const overdueSearch = search.create({
    type: search.Type.SALES_ORDER,
    filters: [
        ['formulanumeric: {shipdate} - {trandate}', 'greaterthan', 30]
    ],
    columns: ['trandate', 'shipdate', 'entity', 'total']
});

const overdueOrders = overdueSearch.run().getRange({ start: 0, end: 1000 });
// Only overdue orders returned - no client-side filtering needed

Performance gain: 60-80% faster for large result sets. The database is optimized for filtering; JavaScript is not.

Pattern 5: Use Lookups Instead of Record Loads

If you only need a few fields from a record, don't load the entire record.

Expensive approach:

// BAD: Loading entire record for 2 fields
function getCustomerInfo(customerId) {
    const customer = record.load({
        type: record.Type.CUSTOMER,
        id: customerId
    });
    
    return {
        name: customer.getValue('companyname'),
        email: customer.getValue('email')
    };
}
// Governance cost: 5-10 units

Cheap approach:

// GOOD: Lookup only the fields you need
function getCustomerInfo(customerId) {
    const fields = search.lookupFields({
        type: search.Type.CUSTOMER,
        id: customerId,
        columns: ['companyname', 'email']
    });
    
    return {
        name: fields.companyname,
        email: fields.email
    };
}
// Governance cost: 1 unit

Governance savings: 80-90% reduction. Use search.lookupFields() whenever you need to read (not write) record data.


Batch Processing with Map/Reduce

For high-volume processing, Map/Reduce scripts are essential. They provide higher governance limits through parallel execution and automatic checkpointing.

Map/Reduce Architecture

Map/Reduce scripts have four phases:

  1. getInputData: Returns the data to process (search, query, or array)
  2. map: Processes each input item in parallel
  3. reduce: Groups and summarizes mapped data
  4. summarize: Final processing and error handling

Governance is allocated per stage: getInputData and summarize each get 10,000 units, while each map invocation gets 1,000 and each reduce invocation gets 5,000. Because map and reduce entries run across parallel workers, total capacity scales with volume.
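Stripped of the N/* modules, the key/value handoff between map and reduce can be sketched in plain JavaScript. This illustrative shuffle groups every map write by key, then calls reduce once per key, which is exactly the shape of data the framework delivers:

```javascript
// Illustrative shuffle between map and reduce: each write(key, value)
// from map lands in a bucket per key; reduce then sees one key plus
// all of its values, mirroring how the Map/Reduce framework groups data.
function runMapReduceSketch(input, mapFn, reduceFn) {
    const buckets = {};
    input.forEach(function (item) {
        mapFn(item, function write(key, value) {
            (buckets[key] = buckets[key] || []).push(value);
        });
    });
    const results = {};
    Object.keys(buckets).forEach(function (key) {
        results[key] = reduceFn(key, buckets[key]);
    });
    return results;
}

// Group hypothetical orders by customer, then count per customer.
const orders = [
    { id: 1, customer: 'A' },
    { id: 2, customer: 'B' },
    { id: 3, customer: 'A' }
];
const counts = runMapReduceSketch(
    orders,
    function map(order, write) { write(order.customer, order.id); },
    function reduce(key, values) { return values.length; }
);
console.log(counts); // { A: 2, B: 1 }
```

Keeping this mental model in mind makes the template below much easier to read: map decides the grouping key, reduce does the per-group work.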

Basic Map/Reduce Template

/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/record', 'N/search', 'N/runtime'], function(record, search, runtime) {
    
    function getInputData() {
        // Return a search, array, or object
        return search.create({
            type: search.Type.SALES_ORDER,
            filters: [
                ['status', 'anyof', 'SalesOrd:B'],
                'AND',
                ['mainline', 'is', 'T']
            ],
            columns: ['entity', 'tranid', 'total']
        });
    }
    
    function map(context) {
        const searchResult = JSON.parse(context.value);
        const orderId = searchResult.id;
        
        // Process individual record
        try {
            processOrder(orderId);
            
            // Write to reduce phase (optional)
            context.write({
                key: searchResult.values.entity.value, // Group by customer
                value: orderId
            });
        } catch (e) {
            log.error('Map Error', { orderId: orderId, error: e.message });
        }
    }
    
    function reduce(context) {
        const customerId = context.key;
        const orderIds = context.values;
        
        // Process grouped data
        log.audit('Customer Orders', {
            customerId: customerId,
            orderCount: orderIds.length
        });
        
        // Summarize per customer
        updateCustomerOrderCount(customerId, orderIds.length);
    }
    
    function summarize(summary) {
        // Log completion stats (note: the error lists are iterators, not arrays)
        log.audit('Map/Reduce Complete', {
            usage: summary.usage,
            concurrency: summary.concurrency,
            yields: summary.yields,
            durationSeconds: summary.seconds
        });
        
        // Handle errors from both stages
        summary.mapSummary.errors.iterator().each(function(key, error) {
            log.error('Map Error', { key: key, error: error });
            return true;
        });
        summary.reduceSummary.errors.iterator().each(function(key, error) {
            log.error('Reduce Error', { key: key, error: error });
            return true;
        });
    }
    
    return {
        getInputData: getInputData,
        map: map,
        reduce: reduce,
        summarize: summarize
    };
});

Map/Reduce Optimization Tips

1. Keep map functions lightweight:

// BAD: Heavy processing in map
function map(context) {
    const orderId = JSON.parse(context.value).id;
    const order = record.load({ type: 'salesorder', id: orderId });
    
    // Lots of line-by-line processing
    for (let i = 0; i < order.getLineCount({ sublistId: 'item' }); i++) {
        // Complex logic per line
    }
    
    order.save();
}

// GOOD: Minimal map, heavy reduce
function map(context) {
    const data = JSON.parse(context.value);
    // Just write to reduce, minimal processing
    context.write({
        key: data.values.entity.value,
        value: JSON.stringify({
            orderId: data.id,
            total: data.values.total
        })
    });
}

function reduce(context) {
    // Batch processing happens here
    const orders = context.values.map(function (v) { return JSON.parse(v); });
    processBatch(orders);
}

2. Use getInputData efficiently:

// BAD: Loading records in getInputData
function getInputData() {
    const orders = [];
    search.create({ type: 'salesorder' }).run().each(function(result) {
        // Don't load records here!
        const order = record.load({ type: 'salesorder', id: result.id });
        orders.push(order);
        return true;
    });
    return orders;
}

// GOOD: Return search object directly
function getInputData() {
    return search.create({
        type: search.Type.SALES_ORDER,
        filters: [['status', 'anyof', 'SalesOrd:B']],
        columns: ['entity', 'tranid', 'total', 'subsidiary']
    });
    // NetSuite handles pagination automatically
}

3. Handle errors gracefully:

function map(context) {
    try {
        const result = JSON.parse(context.value);
        processRecord(result.id);
        context.write({ key: 'success', value: result.id });
    } catch (e) {
        // Don't let one failure stop the batch
        log.error('Map Error', { 
            recordId: context.key,
            error: e.message 
        });
        context.write({ key: 'error', value: context.key + ':' + e.message });
    }
}

function summarize(summary) {
    // Count successes and failures
    let successCount = 0;
    let errorCount = 0;
    
    summary.output.iterator().each(function(key, value) {
        if (key === 'success') successCount++;
        if (key === 'error') errorCount++;
        return true;
    });
    
    log.audit('Processing Complete', {
        success: successCount,
        errors: errorCount,
        duration: summary.seconds + ' seconds'
    });
    
    // Alert if error rate is high
    if (errorCount > successCount * 0.1) { // More than 10% errors
        notifyAdmin('High error rate in Map/Reduce', {
            errors: errorCount,
            total: successCount + errorCount
        });
    }
}

Caching Strategies

Repeated lookups kill performance. Cache data that doesn't change frequently.

Pattern 1: Script-Level Caching

For data used multiple times within a single script execution:

/**
 * Script-level cache for subsidiary data
 */
const subsidiaryCache = {};

function getSubsidiaryName(subsidiaryId) {
    // Check cache first
    if (subsidiaryCache[subsidiaryId]) {
        return subsidiaryCache[subsidiaryId];
    }
    
    // Cache miss - lookup and store
    const fields = search.lookupFields({
        type: search.Type.SUBSIDIARY,
        id: subsidiaryId,
        columns: ['name']
    });
    
    subsidiaryCache[subsidiaryId] = fields.name;
    return fields.name;
}

function processOrders(orders) {
    orders.forEach(function(order) {
        // This lookup is cached after first call per subsidiary
        const subName = getSubsidiaryName(order.subsidiary);
        // Process with subsidiary name
    });
}

Pattern 2: Cross-Execution Caching with N/cache

For data that should persist across script executions:

/**
 * @NApiVersion 2.1
 */
define(['N/cache', 'N/search'], function(cache, search) {
    
    const CACHE_NAME = 'EXCHANGE_RATES';
    const CACHE_TTL = 3600; // 1 hour in seconds
    
    function getExchangeRate(fromCurrency, toCurrency) {
        const cacheKey = fromCurrency + '_' + toCurrency;
        
        const rateCache = cache.getCache({
            name: CACHE_NAME,
            scope: cache.Scope.PUBLIC
        });
        
        // Try to get from cache
        let rate = rateCache.get({ key: cacheKey });
        
        if (rate) {
            return parseFloat(rate);
        }
        
        // Cache miss - fetch from NetSuite
        rate = fetchExchangeRate(fromCurrency, toCurrency);
        
        // Store in cache
        rateCache.put({
            key: cacheKey,
            value: rate.toString(),
            ttl: CACHE_TTL
        });
        
        return rate;
    }
    
    function fetchExchangeRate(fromCurrency, toCurrency) {
        // Actual lookup logic
        const rateSearch = search.create({
            type: 'currencyrate',
            filters: [
                ['basecurrency', 'is', fromCurrency],
                'AND',
                ['transactioncurrency', 'is', toCurrency]
            ],
            columns: ['exchangerate']
        });
        
        const result = rateSearch.run().getRange({ start: 0, end: 1 });
        return result.length > 0 ? parseFloat(result[0].getValue('exchangerate')) : 1;
    }
    
    return { getExchangeRate: getExchangeRate };
});

When to use N/cache:

  • Reference data (exchange rates, tax rates, subsidiary settings)
  • Configuration values that rarely change
  • Computed values that are expensive to calculate

Cache TTL guidelines:

  • Exchange rates: 1-4 hours
  • Configuration: 24 hours
  • Rarely-changing reference data: 1 week
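The get/put/TTL flow above boils down to a small amount of logic. Here is a plain-JavaScript sketch of it (illustrative; N/cache stores string values and handles eviction and scoping for you):

```javascript
// Minimal TTL cache sketch mirroring the N/cache get/put flow.
// Entries expire after ttlSeconds; the loader runs only on misses.
// nowFn is injected so the expiry logic is easy to exercise.
function makeTtlCache(nowFn) {
    const store = {};
    return {
        getOrLoad: function (key, ttlSeconds, loader) {
            const entry = store[key];
            if (entry && entry.expires > nowFn()) {
                return entry.value; // cache hit
            }
            const value = loader(); // cache miss: fetch and remember
            store[key] = { value: value, expires: nowFn() + ttlSeconds * 1000 };
            return value;
        }
    };
}

// Usage: the expensive loader runs once; later calls hit the cache
// until the TTL elapses, then the next call reloads.
let now = 0;
let loads = 0;
const rateCache = makeTtlCache(function () { return now; });
const rate1 = rateCache.getOrLoad('USD_EUR', 3600, function () { loads++; return 0.92; });
const rate2 = rateCache.getOrLoad('USD_EUR', 3600, function () { loads++; return 0.92; });
now = 3600 * 1000 + 1; // advance past the TTL
const rate3 = rateCache.getOrLoad('USD_EUR', 3600, function () { loads++; return 0.93; });
console.log(rate1, rate2, rate3, loads); // 0.92 0.92 0.93 2
```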

Pattern 3: Preload Related Data in Batches

Instead of loading related records one at a time, preload them in a single search:

// BAD: N+1 query pattern
function processOrdersWithCustomers(orderIds) {
    orderIds.forEach(function(orderId) {
        const order = record.load({ type: 'salesorder', id: orderId });
        const customerId = order.getValue('entity');
        
        // Loading customer for each order - N lookups!
        const customer = record.load({ type: 'customer', id: customerId });
        processOrderWithCustomer(order, customer);
    });
}

// GOOD: Preload pattern
function processOrdersWithCustomers(orderIds) {
    // First, get all unique customer IDs
    const orderSearch = search.create({
        type: search.Type.SALES_ORDER,
        filters: [['internalid', 'anyof', orderIds]],
        columns: ['entity']
    });
    
    const customerIds = [];
    orderSearch.run().each(function(result) {
        const customerId = result.getValue('entity');
        if (customerIds.indexOf(customerId) === -1) {
            customerIds.push(customerId);
        }
        return true;
    });
    
    // Preload all customers in one search
    const customerData = {};
    search.create({
        type: search.Type.CUSTOMER,
        filters: [['internalid', 'anyof', customerIds]],
        columns: ['companyname', 'email', 'salesrep', 'pricelevel']
    }).run().each(function(result) {
        customerData[result.id] = {
            name: result.getValue('companyname'),
            email: result.getValue('email'),
            salesrep: result.getValue('salesrep'),
            pricelevel: result.getValue('pricelevel')
        };
        return true;
    });
    
    // Now process orders with cached customer data
    orderIds.forEach(function(orderId) {
        const order = record.load({ type: 'salesorder', id: orderId });
        const customerId = order.getValue('entity');
        const customer = customerData[customerId]; // Instant lookup
        processOrderWithCustomer(order, customer);
    });
}

Benchmark: Processing 500 orders with the N+1 pattern takes 45 seconds. With preloading, it takes 12 seconds—a 73% improvement.


Memory Management

SuiteScript runs in a constrained memory environment. Poor memory management leads to script failures.

Avoid Loading Large Arrays

// BAD: Accumulating every result in memory
const allResults = [];
search.create({ type: 'transaction' }).run().each(function(result) {
    allResults.push(result);
    return true;
});
// allResults now holds thousands of result objects at once

// GOOD: Process and discard
search.create({ type: 'transaction' }).run().each(function(result) {
    processResult(result);
    // result becomes eligible for garbage collection after this callback
    return true;
});

Stream File Processing

When processing large files, stream instead of loading entirely:

// BAD: Load entire file into memory
const fileContent = file.load({ id: fileId }).getContents();
const lines = fileContent.split('\n');
lines.forEach(processLine);

// GOOD: Use the file lines iterator (server-side scripts)
function processLargeFile(fileId) {
    const csvFile = file.load({ id: fileId });
    
    csvFile.lines.iterator().each(function(line) {
        processLine(line.value);
        return true; // Continue iteration
    });
}

Clean Up Large Objects

function processLargeDataSet(data) {
    // Work in chunks so only one chunk's results live in memory at a time
    for (let i = 0; i < data.length; i += 1000) {
        const chunk = data.slice(i, i + 1000);
        let chunkResults = processChunk(chunk);
        
        // Persist, then drop the reference so it can be garbage collected
        saveResults(chunkResults);
        chunkResults = null;
    }
}

Before/After Benchmarks

Real performance improvements from our optimizations:


Case 1: Order Processing Script

Before optimization:

  • Execution time: 180 seconds
  • Governance used: 9,847 units
  • Memory peak: 450MB

Problems found:

  • Loading full records when only 3 fields needed
  • Saving records in a loop instead of batching
  • No caching of subsidiary lookups

After optimization:

  • Execution time: 32 seconds
  • Governance used: 2,156 units
  • Memory peak: 85MB

Improvement: 82% faster, 78% less governance

Case 2: Inventory Sync Scheduled Script

Before optimization:

  • Processing 15,000 items took 4 hours
  • Frequent timeout failures
  • Required manual restart

Problems found:

  • Sequential processing (no Map/Reduce)
  • Querying warehouse availability one item at a time
  • Recalculating same data repeatedly

After optimization:

  • Processing 15,000 items takes 22 minutes
  • Zero failures in 6 months
  • Fully automated

Improvement: 91% faster, 100% reliability

Case 3: Customer Statement Suitelet

Before optimization:

  • Page load time: 12 seconds
  • Users complained constantly

Problems found:

  • Running 47 searches on page load
  • Loading full customer record for display name
  • No caching of static data

After optimization:

  • Page load time: 1.4 seconds
  • User satisfaction restored

Improvement: 88% faster


Performance Audit Checklist

Use this checklist when reviewing SuiteScript performance:

Search Optimization

  • Using saved searches instead of scripted where possible
  • Returning only necessary columns
  • Using paged data for large result sets
  • Filtering in queries, not in JavaScript
  • Using formula fields for calculations

Record Operations

  • Using lookupFields instead of record.load for reads
  • Using submitFields instead of load/modify/save where possible
  • Batching record operations
  • Avoiding N+1 query patterns

Governance

  • Monitoring remaining usage in loops
  • Exiting gracefully before limit
  • Using Map/Reduce for high-volume processing
  • Appropriate script type for the use case

Caching

  • Script-level caching for repeated lookups
  • N/cache for cross-execution persistence
  • Preloading related data in batches
  • Appropriate TTL values

Memory

  • Not holding large arrays unnecessarily
  • Processing data in streams/chunks
  • Cleaning up temporary objects
  • Avoiding unnecessary object creation

Frequently Asked Questions

How do I know if my script has performance problems?

Monitor execution time and governance usage. If a script consistently uses >80% of its governance, or takes longer than expected for its task, investigate. NetSuite's Execution Log shows both metrics.

Should I always use Map/Reduce for batch processing?

Not always. For small batches (under 500 records), a scheduled script is simpler and sufficient. Map/Reduce adds complexity. Use it when you need parallel processing, higher governance limits, or automatic checkpointing for very large jobs.

How much does caching really help?

Significantly. We've seen scripts go from 200ms per iteration to 5ms just by caching subsidiary and currency lookups. For scripts processing thousands of records, caching turns hours into minutes.

What's the biggest performance killer in SuiteScript?

Loading records when you don't need to. Every record.load() is expensive. If you only need to read data, use search.lookupFields(). If you only need to update a few fields, use record.submitFields(). Reserve record.load() for when you need the full record object.

Does SuiteScript 2.0 perform better than 1.0?

Generally yes, especially for search operations and newer APIs. SuiteScript 2.x also has better async support and more efficient modules. If you're still on 1.0, migration provides performance benefits beyond just code modernization.


Next Steps

Optimizing SuiteScript is an iterative process. Start with the biggest problems:

  1. Audit your slowest scripts: Check execution logs for scripts consuming the most time and governance
  2. Apply the search optimization patterns: This usually provides the biggest wins
  3. Implement caching: Especially for reference data lookups
  4. Consider Map/Reduce for batch jobs: If you're processing thousands of records

Performance optimization isn't a one-time task. As your data grows and processes evolve, revisit these patterns regularly.


Get Expert Help

Complex SuiteScript performance problems often require deep investigation. We've optimized scripts that seemed impossible to fix—the patterns above came from that experience.

If your scripts are slow, hitting governance limits, or unreliable, contact us for a performance audit. We'll identify the bottlenecks and provide specific, actionable recommendations.

For ongoing SuiteScript development needs, explore our SuiteScript Development services.
