The legend W. Edwards Deming said it best – “You Can’t Manage What You Don’t Measure”. When it comes to booking retail power deals, it’s always good to do so with the end measurements in mind. Without the proper organization of costs and the ability to allocate costs back to contracts on a per-kWh basis (or per-kW if the cost is demand-driven), the business can’t answer the fundamental questions it needs to be profitable.

  • When pricing a contract, how do we make sure the pricing estimates are both complete and accurate?
  • When servicing a contract, how do we identify events that could have a negative financial impact on the actuals, and make adjustments to reduce or eliminate that risk?
  • How can we use past performance to help us identify ways to improve?

Let’s dive into a few particulars of the retail power industry to better understand how to organize the data. As with any industry, there are two types of associated costs: fixed and variable. The fixed costs are the obvious ones, such as rent or hardware. The variable costs, however, come in different flavors that correlate to how the power physically works. First, some assumptions need to be stated. Energy delivery is componentized, and the components are ISO-specific. There are themes across most ISOs, such as Energy or Transportation, but there are also ISO-specific components, such as Capacity, which need to be accounted for in some ISOs but not others. For the sake of simplicity, we are going to assume that each ISO will be modeled independently, as trying to merge all ISOs into a single model doesn’t provide much benefit in terms of data usefulness.
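To make the per-ISO assumption concrete, here is a minimal sketch. The component sets below are illustrative, not complete or authoritative lists; the point is only that some components are shared across ISOs while others (like Capacity) apply in some ISOs and not others, which is why merging everything into one model buys little.

```python
# Hypothetical, simplified component sets per ISO (illustrative only).
ISO_COMPONENTS = {
    "ERCOT": {"Energy", "Transportation", "Ancillary Services"},
    "PJM":   {"Energy", "Transportation", "Ancillary Services", "Capacity"},
    "NYISO": {"Energy", "Transportation", "Ancillary Services", "Capacity"},
}

# Themes shared by every ISO in the sketch:
common = set.intersection(*ISO_COMPONENTS.values())

# ISO-specific components that only some markets carry:
iso_specific = set.union(*ISO_COMPONENTS.values()) - common
```

Modeling each ISO independently means each model only carries the components that actually apply to it, rather than a superset full of nulls.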

Next, whenever costs are discussed, it’s rarely a simple cost, like $.03 per kWh. Most usage charge estimates are modeled as an hourly price curve, meaning the costs associated with a particular component of the energy are unique to the hour in which the costs happened, and can be different for every hour. Modeling a single usage cost for an entire year, with hourly granularity, creates 8,760 possible values. Some shops summarize at higher levels, such as on-peak/off-peak values per day, but that can lead to issues, as spikes in a particular hour can be lost. When data is summarized, data granularity is compromised. However, keeping hourly granularity also has its costs due to the data management required. If a power product is broken down into 30 different components with hourly granularity, there are now 262,800 individual price points to deal with, and doing multiplication for 8,760 estimated volumes across the 262,800 price points may have a performance impact.
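The arithmetic above can be sketched in a few lines. This is a toy model, not a production pricing engine: the prices and volumes are random stand-ins, and the component count of 30 is just the figure from the text. It does show the scale, though: 30 components × 8,760 hours = 262,800 price points per product-year.

```python
import numpy as np

HOURS = 8_760        # hourly granularity for one year
COMPONENTS = 30      # illustrative component count from the text

rng = np.random.default_rng(0)

# Hypothetical hourly price curves, one row per component ($/kWh).
prices = rng.uniform(0.01, 0.10, size=(COMPONENTS, HOURS))

# One estimated hourly usage curve (kWh) for the contract.
volumes = rng.uniform(100, 500, size=HOURS)

# Elementwise multiply each component's curve by the hourly volumes,
# then sum across hours -> estimated annual cost per component.
component_costs = (prices * volumes).sum(axis=1)
total_estimated_cost = component_costs.sum()
```

Even vectorized, doing this for thousands of contracts across dozens of components is where the performance impact mentioned above starts to bite.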

The complications really begin when the model starts to enter into the actual power delivery. When pricing a contract, we take each one of the pricing components, multiply those price estimates by the associated hourly measure (usage or demand), and come out with the total estimated costs for delivery (adding in fixed costs, expected profit, etc.). The problem happens when the power is actually delivered, because the granularity is lost. The load scheduled and subsequent invoice amounts for each of these components can come back in an aggregated fashion, which means I will need to break them down to get back to the same level of granularity used during the estimations to get actual feedback on how accurate the estimates were. If I have 12,000 contracts that represented 4 MWh of usage at 9:00 AM on Feb 22, 2016, I will want to break those invoice amounts down to a per-kWh cost, and apply those costs back to each contract based on the individual meter’s actual usage (load ratio allocation). For simplicity’s sake, we won’t discuss specific schedules for individual large C&I contracts muddled into the measurements, or how some of the costs coming back may not be applicable to all contracts.
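Load ratio allocation itself is simple arithmetic; a minimal sketch (meter names and dollar amounts are illustrative) of breaking one aggregated hourly invoice line back down to per-meter costs by usage share:

```python
def load_ratio_allocate(invoice_amount, meter_usage_kwh):
    """Split an aggregated hourly charge across meters in proportion
    to each meter's share of the total metered usage for that hour."""
    total_kwh = sum(meter_usage_kwh.values())
    per_kwh = invoice_amount / total_kwh          # derived $/kWh rate
    return {meter: kwh * per_kwh for meter, kwh in meter_usage_kwh.items()}

# e.g. a $120 invoice line against 4,000 kWh of metered usage
usage = {"meter_a": 1_000, "meter_b": 2_500, "meter_c": 500}
allocated = load_ratio_allocate(120.0, usage)
# -> $0.03/kWh, so meter_a gets $30, meter_b $75, meter_c $15
```

The hard part isn’t the division; it’s keeping per-component, per-hour actuals lined up with the estimates so the comparison is apples to apples.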

Now that some of the complications are known, let’s talk about some of the best practices to keep in mind when modeling retail power.

  • Loss of Granularity – whenever there is an aggregation, you are making a choice to possibly miss the ability to make adjustments in operations. To provide another example of aggregation, some shops will try to shrink the model by aggregating multiple components into a single representation, such as a blanket “Ancillary Services” bucket or “Energy” category. If you price at the bucket level, the only thing you can compare is buckets, so if the “Ancillary Services” charge suddenly comes back at double the estimate, there may be a lot of work in store for you to determine what caused the issue.
  • Separation of Load – tag and categorize load as much as possible. At a bare minimum, every contract/meter can be tagged with its corresponding ISO, Utility, State, LoadZone and CapacityZone (if NYISO). From there, shops can continue to break the load down into smaller groupings, but the groupings are usually unique to the organization. There is usually some breakdown between Residential/SMB contracts and Large contracts, because Resi/SMB tend to be fixed-price deals, while larger contracts might fix or pass through associated charges. The last tagging possibility is at the scheduling entity level, where REPs can schedule their loads under different scheduling or legal entities, resulting in multiple ISO invoices coming back for each separate entity.
  • Flexibility – the model and all systems using the model need to be able to absorb adjustments with little cycle time. There are cases where new, significant charges come up (“we didn’t model that because it’s usually not relevant”) or where the way charges are allocated changes. If a new customer wants to price at a component level that you currently don’t support but your competitors do, then you will be forced to either pass on the deal or make it work, which makes flexible models very important for larger deals.
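The “Separation of Load” tags above can be sketched as a small data model. Field names follow the text; the segment values and the example zone names are illustrative assumptions, and CapacityZone is optional since, per the text, it only applies in NYISO.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ContractTags:
    """Minimum tagging for a contract/meter, per the text above."""
    iso: str
    utility: str
    state: str
    load_zone: str
    capacity_zone: Optional[str] = None      # NYISO only
    segment: str = "Resi/SMB"                # vs. "Large C&I" (illustrative)
    scheduling_entity: Optional[str] = None  # one ISO invoice per entity

# Hypothetical NYISO contract:
tag = ContractTags(iso="NYISO", utility="ConEd", state="NY",
                   load_zone="Zone J", capacity_zone="NYC")
```

Keeping the tags in one immutable structure makes it cheap to group actuals and estimates along any of these dimensions later, which is exactly the comparison the earlier sections depend on.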