Stephen Nimmo

Energy Trading, Risk Management and Software Development


Let’s scrape ERCOT for some DAM data

One of the largest sources of complexity for power retailers, traders or anyone involved in the sourcing or delivery of power is pricing. Unlike the crude, refined products and NG markets, power prices are highly volatile and updated almost constantly. If you are responsible for delivering load on the grid, and have not fully sourced or hedged your price exposure, these swings can literally put you out of business.


To obtain the real time and day ahead power prices, you have two choices – you can find a data provider to do all the work or you can scrape the prices yourself. Web scraping is a delicate art, full of gotchas and issues, because you are effectively building against an endpoint that was never designed to be a data endpoint. Changes to web interfaces occur with little or no forewarning, and even small adjustments to HTML structures, tag names or ordering can put you in a world of hurt. As a warning, if you choose to scrape, be ready for the possibility of some real-time production support and fast deployments if a provider changes their UI. And they will.

With all that said, let’s take a look at how someone might scrape. For this example, we will be scraping the Day Ahead Market (DAM) Settlement Point Prices, which should only need to be scraped once a day. The first thing is to understand how ERCOT presents the information. There are two views available: a web view, and CSV and XML reports. We are staying away from the web views (see the warnings above) and concentrating on the CSV and XML reporting views.

[Screenshot: ERCOT MIS report listing for DAM Settlement Point Prices]

Let’s start with the url. Here is the url for the ERCOT MIS reports app for DAM Settlement Point Prices.

http://mis.ercot.com/misapp/GetReports.do?reportTypeId=12331&reportTitle=DAM%20Settlement%20Point%20Prices&showHTMLView=&mimicKey

Here are the thoughts you should be having:

  1. Do I need all the parameters or just the reportTypeId? (Just the reportTypeId)
  2. Is the reportTypeId correlated to the DAM Settlement Point Prices? Will each type of report have its own integer based reportTypeId? And do those stay the same? (Not Sure about any of these)

So now we have a baseline. By looking at some of the other reports, you can quickly gather fairly strong evidence that each report type has a unique reportTypeId and that they stay the same. If I know the reportTypeId of the type of reports I want to scrape, I can simply replace the url’s reportTypeId with the report id I want and, theoretically, I should be able to pull any reports I want. Oh, wait. What is this? When I mouse-over the zip file for the xml, I get another url.

http://mis.ercot.com/misdownload/servlets/mirDownload?mimic_duns=&doclookupId=483750748

Here are the thoughts you should be having:

  1. Do I need all the parameters or just the docLookupId? (Just the docLookupId) (By the way, the mimic_duns looks like something that, if you know the DUNS of an entity, might mock out their view)
  2. Is the docLookupId correlated to the particular day’s version and file type of the DAM Settlement Point Prices? (No) Will each report have its own integer based docLookupId? (Yes) And do those have any type of identifiable algorithm? (No) (this is not a great scenario)

Right now, there are a couple of options when you cannot programmatically discern the particular report you are looking for. You can:

  1. Scrape it all and backfill data gaps programmatically.
  2. Parse the HTML and find the nodes with the file names, parse the names finding the dates you want between the dots, and then find the zip link in the same html row.

I didn’t like either of these as they are both prone to the html change issue we described earlier. So I started poking around on the source of the list page and found something interesting.

[Screenshot: page source of the report listing, showing the base href]

You see that base href url? http://mis.ercot.com:80/misapp/ Well, I did. And I clicked it. And thank goodness I did because my life just got a ton easier.

[Screenshot: the ICE version of the ERCOT MIS report listing]

Looks like ICE has its own little version of the ERCOT MIS website. And if you dig into the links, you get the first web service (http://mis.ercot.com/misapp/servlets/IceMktRepListWS), which provides a list of the reportTypeIds for every report. In this list, we find the ID for the report we are looking for.


<ns0:Report>
  <ns0:ReportTypeID>12331</ns0:ReportTypeID>
  <ns0:ReportName>DAM Settlement Point Prices</ns0:ReportName>
  <ns0:SecurityClassification>PUBLIC</ns0:SecurityClassification>
</ns0:Report>

Now, we can use these reportTypeIds in the other web service to give us a list of the docLookupIds with some supporting information. http://mis.ercot.com/misapp/servlets/IceMktDocListWS?reportTypeId=12331


<ns0:Document>
  <ns0:ExpiredDate>2015-07-11T23:59:59-05:00</ns0:ExpiredDate>
  <ns0:ILMStatus>EXT</ns0:ILMStatus>
  <ns0:SecurityStatus>P</ns0:SecurityStatus>
  <ns0:ContentSize>91581</ns0:ContentSize>
  <ns0:Extension>zip</ns0:Extension>
  <ns0:FileName/>
  <ns0:ReportTypeID>12331</ns0:ReportTypeID>
  <ns0:Prefix>cdr</ns0:Prefix>
  <ns0:FriendlyName>DAMSPNP4190_xml</ns0:FriendlyName>
  <ns0:ConstructedName>cdr.00012331.0000000000000000.20150610.131348757.DAMSPNP4190_xml.zip</ns0:ConstructedName>
  <ns0:DocID>483750748</ns0:DocID>
  <ns0:PublishDate>2015-06-10T13:13:47-05:00</ns0:PublishDate>
  <ns0:ReportName>DAM Settlement Point Prices</ns0:ReportName>
  <ns0:DUNS>0000000000000000</ns0:DUNS>
  <ns0:DocCount>0</ns0:DocCount>
</ns0:Document>

Nice. Two steps down, one to go. We can now parse the xml to identify the docID we need by date (PublishDate) and file type (IndexOf ‘xml’ on FriendlyName). And once we have the docId, we can then make the final call to retrieve the zip file using the previously known url:

http://mis.ercot.com/misdownload/servlets/mirDownload?doclookupId=483750748

Using these patterns and having the list of documents, it would be very easy to scrape the site based on dates, either to backfill missing data or just to parse data on a daily basis. Let’s review the steps (a rough code sketch follows the list).

  1. Get all the report types and their respective reportTypeIds.
  2. Get all the docIDs for the reportTypeId by parsing the xml by date and/or file type.
  3. For each node, call the third url to pull in the zip file. Unzip. Parse. Save. Done.
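Here is a rough sketch of the pull side in Python. The urls and XML element names come straight from the examples above; everything else (the helper names, the namespace handling, the lack of retries and throttling) is my own assumption, not production code.

# Pull DAM Settlement Point Price zips via the ERCOT MIS list services.
# A rough sketch, not production code: error handling, retries and
# polite throttling are left as an exercise.
import urllib.request
import xml.etree.ElementTree as ET

LIST_REPORTS_URL = "http://mis.ercot.com/misapp/servlets/IceMktRepListWS"
LIST_DOCS_URL = "http://mis.ercot.com/misapp/servlets/IceMktDocListWS?reportTypeId={}"
DOWNLOAD_URL = "http://mis.ercot.com/misdownload/servlets/mirDownload?doclookupId={}"


def fetch_xml(url):
    # GET a url and parse the body as XML.
    with urllib.request.urlopen(url) as resp:
        return ET.fromstring(resp.read())


def local(tag):
    # Strip the namespace: '{...}Report' -> 'Report'.
    return tag.split("}")[-1]


def find_report_type_id(report_name="DAM Settlement Point Prices"):
    # Step 1: map a report name to its reportTypeId.
    root = fetch_xml(LIST_REPORTS_URL)
    for report in root.iter():
        if local(report.tag) != "Report":
            continue
        fields = {local(c.tag): (c.text or "") for c in report}
        if fields.get("ReportName") == report_name:
            return fields["ReportTypeID"]
    raise LookupError("report not found: " + report_name)


def find_doc_ids(report_type_id, publish_date="2015-06-10", flavor="xml"):
    # Step 2: find the DocIDs for a publish date and file flavor.
    root = fetch_xml(LIST_DOCS_URL.format(report_type_id))
    doc_ids = []
    for doc in root.iter():
        if local(doc.tag) != "Document":
            continue
        fields = {local(c.tag): (c.text or "") for c in doc}
        if (fields.get("PublishDate", "").startswith(publish_date)
                and flavor in fields.get("FriendlyName", "")):
            doc_ids.append(fields["DocID"])
    return doc_ids


def download_zip(doc_id, target_dir="."):
    # Step 3: pull the zip for a DocID and save the raw file to disk.
    path = "{}/{}.zip".format(target_dir, doc_id)
    with urllib.request.urlopen(DOWNLOAD_URL.format(doc_id)) as resp:
        with open(path, "wb") as out:
            out.write(resp.read())
    return path


if __name__ == "__main__":
    report_type_id = find_report_type_id()   # should come back as 12331
    for doc_id in find_doc_ids(report_type_id):
        print("saved", download_zip(doc_id))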

With great power comes great responsibility! Be nice to the ERCOT folks: limit the scraping to only the data you need, at the lowest frequency you need, and store the downloaded raw files in a document server (or an Amazon S3 bucket or Azure blob storage) if you think you might need them again in the future (you will). Disk space is cheap. Also, write two different processes: one for pulling the data and one for parsing it. That way, other processes can also work from the raw data.
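And a separate sketch for the parse side, assuming you pulled the csv flavor of the report (swap in an XML parser for the xml flavor). The column names are placeholders, not the actual ERCOT layout.

# Parse a previously downloaded DAM Settlement Point Price zip.
# Kept separate from the pull process so the raw files can be re-parsed later.
import csv
import io
import zipfile


def parse_dam_zip(path):
    # Yield one dict per row from every csv inside the zip.
    with zipfile.ZipFile(path) as zf:
        for name in zf.namelist():
            if not name.lower().endswith(".csv"):
                continue
            with zf.open(name) as raw:
                reader = csv.DictReader(io.TextIOWrapper(raw, encoding="utf-8"))
                for row in reader:
                    yield row  # e.g. delivery date, settlement point, price columns


if __name__ == "__main__":
    for row in parse_dam_zip("483750748.zip"):
        print(row)
        break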

Enjoy your scraping.

Identity Crisis

A few months ago, I was having a conversation over coffee with an IT operations manager at a very large company about their lack of standards in managing deployments and their dependency on manual processes. As these conversations usually go, what started as a very targeted discussion about optimizing a single aspect of software development turned into a fantastic philosophical journey across the entire spectrum of issues, from managing executive expectations to what software development will look like ten years from now. Keep in mind, this is a multinational company with hundreds of programmers, ops and IT support employees managing millions of lines of completely custom software as well as millions of lines of code customizing vendor packages, with software budgets in the hundreds of millions. That’s when the statement was dropped:

“We aren’t a software company.”

Much like Happy Gilmore would never admit he’s a golfer, but rather a hockey player, there are so many large companies out there having this identity crisis and refusing to acknowledge reality. Just because the product produced is not software does not exempt the organization from adopting good processes and standards related to software. For example, if an employee of a widget manufacturing company was asked what the company did, the answer would be making widgets. Along the same lines, if you asked their CFO whether the accounting department followed GAAP standards, the answer would certainly be “yes”. However, when you walk into the IT departments and ask the developers or software managers whether they do things like unit testing, regression testing or automated security controls, you would be very surprised how often the answer is no. Why is there a perceived difference?

The main difference is visibility. Auditors don’t come in to make sure good software development standards are in place, but they do come in and pore through the ledgers to ensure accounting is happening according to standards. If standards aren’t in place for your accounting department, the pain can be immense, especially for public companies. However, accounting standards aren’t followed simply because people will be checking. The standards are in place because collective business experience shows that following them is a good thing from a productivity standpoint. In the long term, standards create efficiencies and reduce costly errors. These same companies choose to implement rigid standards in one aspect of their business, yet leave other departments with hundreds of employees and millions of budgeted dollars free of standards.

Leaving basic standards, like unit testing and automated builds, out of the software development process is just as visible, only not in the same way. That production outage last week because of a last-minute code change? That was because there weren’t any unit tests. That year-long, million-dollar software upgrade that turned into a three-million-dollar, two-year effort? That’s because there weren’t any requirement standards, performance testing or architectural standards in place. The lack of standards in software development processes is very visible and equally as painful as an accounting department not following GAAP; the difference is the inability to draw the correct conclusions about root cause. The employees ultimately responsible will attribute the outage to the bad code or the poor requirements, but the real answer is a lack of effective standards and best practices.

Just as GAAP standards won’t fix every accounting woe, great software development practices won’t catch every issue, but they sure will stop a lot of them. So the next time there is a big production outage, ask a different set of questions:

  • Can you provide me a list of all the unit tests that were run for this?
  • Did the regression tests fail for this release?
  • When the rollback scripts were run, why weren’t they effective?

The answers might surprise you.

Choosing an Enterprise Trading Communication Protocol

In building modular software for energy trading organizations, each module is designed to perform a particular set of tasks, usually correlated to the offices: front, mid and back. The main purpose of creating modular, independent systems is the ability to target a particular set of features without having to account for everything upstream and downstream in the IT value chain. Building software for exchange order management requires a different set of behaviors and characteristics than creating invoices for your residential power customers. Building the entire value chain of an energy trading shop into a single application may work for startups and limited scope, but will quickly decay in the face of scale. To create scalable systems, modularity is required, but with modularity comes the requirement of data interfaces.

Building data interfaces and APIs is an important part of building these types of software. For many different reasons, the architects and developers involved will spin their wheels creating a new, completely proprietary way to describe the nouns of their system, such as trades or positions, instead of using one of the multiple proven protocols already available for free. Usually the arguments against using the industry standard protocols internally are around perceived unique requirements, or the perception that these fully baked protocols are overkill. However, a proprietary communications protocol creates an ongoing and expensive technical debt within the organization as the scope of the protocol expands and disagreements mount between system owners about the “right” way to model a particular interaction. It’s much easier to stand on the shoulders of industry giants and leverage years of trading experience by using proven and widely adopted protocols, while also gaining an externally managed mediation point for disagreements about how and what gets communicated.

Right now, there are three main standards for modeling communication for energy trading. These standards have gotten quite a face lift in the past four years due to their expansion related to changes created by Dodd-Frank and other regulatory legislation. They cover not only format but also some standardization of content, allowing reuse of enumerations to describe certain trade values, such as settlement types or asset classes.

  • FIX / FIXML – Financial Information eXchange – This protocol is the most mature, as it was born from equities trading and other models that are purely financially settled. However, in recent years the protocol has expanded into several different venues of commodities trading, becoming the protocol of choice for ICE’s Trade Capture and Order Routing platforms as well as almost all of CME’s portfolio of electronic connectivity. This model is more normalized, in the sense of having multiple reference points to different related data. Instrument definitions can be communicated using different timing or venues while trades simply refer to those instruments using codes, allowing for faster communication.
  • FpML – Financial Products Markup Language – This protocol was recently used by DTCC to implement their SDR, and while it is much more robust in its implementation, it has quite a bit of a learning curve. The communication structure lends itself more toward each unit being a complete and total description of a transaction, duplicating standard data such as product or counterparty information across transactions. This protocol is XML based and much more verbose, but allows finer grained control around things like date logic for instruments. The protocol also has multiple versions, tailored to the specific needs of the organization.
  • CpML – Similar to FpML, this protocol is aimed directly at describing commodities, and although it is more widely adopted across the pond in Europe around EFET, it holds value for US-based organizations as well.

But picking a standard protocol is only the first step. There are some additional concerns to keep in mind when implementing these protocols to reduce the number of headaches later.

  • Treat internal data consumers exactly the same way you would treat external data consumers – like customers.
  • Create a standards committee that has power to define and model the system interactions, but make sure the committee can make quick decisions through informal processes.
  • Always force any requests to extend the model for proprietary requirements through a rigorous standards process to ensure those designing the interfaces aren’t simply doing the easy thing rather than the right thing. I am always amazed at how quickly an organization can throw out the need for technical standardization when faced with a small decision to adjust an existing business process.
  • Broadcast the event generically, allowing listeners to determine what they need and throw away what they don’t. All else being equal, it’s easier to broadcast everything rather than open up individual data elements one by one (see the sketch after this list).
  • Create and use common patterns for interfacing. Having one system be request-response and another be messaging based will create just as many issues as proprietary XML protocols.
  • As always, make the right thing to do the easy thing to do for developers. Make an investment in tooling and training for the stakeholders involved to ensure success.
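To illustrate the “broadcast generically and let listeners filter” point, here is a tiny in-process sketch in Python. The bus, topics and field names are all hypothetical; in a real shop the bus would be a message broker and the payload an FpML/FIXML/CpML document rather than a dictionary.

# A minimal in-process stand-in for a message broker: publish once,
# let every subscriber pick out the fields it cares about.
from collections import defaultdict


class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Broadcast the whole event; filtering is the listener's problem.
        for handler in self._subscribers[topic]:
            handler(event)


def risk_listener(event):
    # Risk only cares about quantity and price.
    print("risk:", event["quantity"], event["price"])


def settlement_listener(event):
    # Settlements pulls a different slice of the same broadcast.
    print("settlements:", event["counterparty"], event["settlement_type"])


if __name__ == "__main__":
    bus = EventBus()
    bus.subscribe("trade.executed", risk_listener)
    bus.subscribe("trade.executed", settlement_listener)
    bus.publish("trade.executed", {
        "trade_id": "T-1001",
        "counterparty": "ACME Power",
        "quantity": 50,
        "price": 42.75,
        "settlement_type": "physical",
    })

The point of the sketch: the publisher never knows or cares which downstream systems exist, and each listener takes only the slice of the event it needs.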

Next Phase of SDR Reporting – Reconciliation

It’s now almost six months since the final deadline for compliance with CFTC Title VII Dodd Frank regulations requiring Swap Data Repository (SDR) reporting. With the proverbial dust settled and the holiday season behind, commodities trading compliance groups across the country are engaging in the next phase of Dodd-Frank implementations, including the upcoming position limits rules. However, some market participants are finding new requirements for already implemented rules springing to the forefront of their backlogs.

To understand the new requirements, it is necessary to understand the landscape in which CFTC Title VII Part 43 (Real-Time Public Reporting) and Part 45 (Swap Data Recordkeeping and Reporting Requirements) were implemented with respect to commodities. There was also quite a bit of volatility in the requirements themselves, as many different legal interpretations regarding the exact meaning of phrases such as “swap” were left in limbo. In what is arguably the most complicating implementation decision, the data standards for Part 45 included a rule permitting an SDR to allow those reporting data to it to use any data standard acceptable to the SDR. The effects of this were felt almost immediately, as market participants could not begin work on delivery of the data until the SDRs committed to a format. While there were some basic guidelines presented in the rule appendices, the implementation ended up being a much more complicated endeavor.

The SDR vendor situation also played a huge role in the current situation. The first SDR approved for commodities was ICE’s SDR, Trade Vault. However, it was only provisionally approved in late June 2012. At that point, Part 45 rules included a mid-January 2013 deadline for reporting of “other” derivatives classes, which included commodities. It was seven months before the deadline, and commodities market participants had only one SDR available – in beta. The only other SDR vendor on the horizon for commodities was the DTCC Data Repository (DDR), but their request to operate for commodities derivatives was still pending. What was interesting was that DDR was already operating in the credit, equity, interest rate and foreign exchange derivatives markets, and due to their alignments on Wall Street with the existing swap dealers in those other lines of business, DTCC had an overwhelming slice of the market share on SDR reporting for those asset classes.

However, the commodities market participants responded enthusiastically to ICE’s Trade Vault based on several compelling system features. First, the product’s interface and workflows piggybacked directly on its very popular electronic confirmation system, eConfirm. By leveraging the same interface, many commodities market participants saw a huge advantage in limiting the DFA Part 43 and 45 implementations to supplementing already existing confirmation processes with the new DFA-prescribed data fields, such as execution timestamp. In addition to leveraging the interfaces, ICE created some really valuable services for its customers, both documenting which instrument types were reportable under the law and allowing a single message submission to comply with both the real-time and PET reporting requirements. DTCC’s approval for DDR to handle commodities in the U.S. didn’t come until late 2012, which was too late for most of the market participants – except the ones who were already participating in DDR in other asset classes.

With the CFTC pushing deadlines out until mid 2013, the market had some time to catch up and finish their implementations, but the delay also created some interesting data fragmentation issues. The Non-SD/Non-MSP market participants, sometimes referred to as “end-users”, largely ended up using ICE’s Trade Vault as their SDR. However, a large swath of swap dealers ended up with DTCC’s Data Repository, and it’s not hard to see why. These commodities swap dealers were mostly big banks or very large financial institutions engaged in multi-asset class derivatives dealing in addition to many other services, most of which already communicated with DTCC for other services, such as securities clearing and settlements. The larger financial institutions were also already familiar with the communication protocol used by DDR, the Financial Products Markup Language (FpML), which was used extensively in other asset classes. This split creates an interesting set of issues for the market because of another subtle nuance to the DFA reporting rules – the participant who is responsible for reporting gets to pick the destination SDR. For a market participant using ICE’s Trade Vault as its only SDR, executing a swap on an electronic platform (SEF or DCM) or with a swap dealer using DDR creates a situation where the data reported is not “seen” by the end-user’s systems. When a third SDR is added to the market in the form of CME’s Repository Service, the reporting situation becomes even more complicated and difficult.

As market participants enter the new year, they will be faced with prioritizing a new set of reconciliation burdens from SDR reporting. While position limits and other DFA rules may take a front seat, the entire commodities market will be left with very few market participants able to verify and reconcile the data being reported on their behalf to the CFTC. Until the SDR data harmonization rules are enforced, regulatory reporting requirements will continue to stay volatile, complex and costly due to this data fragmentation issue.

CFTC Will Hold an Open Meeting to Consider Proposals on Position Limits


The CFTC will be holding an open meeting to discuss and consider proposals on the position limits rules associated with the Dodd-Frank Act. To listen in on the entire proceedings, you can either call in or subscribe to their webcast. Details are here.

The biggest part of the legislation pertains to the spot month limits. Per the FAQ:

“Spot-month position limits will be set at 25% of deliverable supply for a given commodity, with a conditional spot-month limit of five times that amount for entities with positions exclusively in cash-settled contracts.

Non-spot-month position limits (aggregate single-month and all-months-combined limits that would apply across classes, as well as single-month and all-months-combined position limits separately for futures and swaps) will be set for each referenced contract at 10 percent of open interest in that contract up to the first 25,000 contracts, and 2.5 percent thereafter.”
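As a quick worked example of the non-spot-month formula as I read it (my arithmetic, not the CFTC’s):

def non_spot_month_limit(open_interest):
    # 10% of the first 25,000 contracts of open interest, 2.5% thereafter.
    base = min(open_interest, 25000)
    excess = max(open_interest - 25000, 0)
    return 0.10 * base + 0.025 * excess


# 100,000 contracts of open interest:
# 10% of 25,000 = 2,500, plus 2.5% of 75,000 = 1,875, for a limit of 4,375.
print(non_spot_month_limit(100000))  # 4375.0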

In addition to the limits themselves, there are also extensive rules around position aggregation, trying to close the loopholes created by spreading positions across different legal entities with common ownership.

At the end of the day, the CFTC is trying to remove rampant speculation causing additional volatility in the near month markets. However, the commission may be going too far, reducing the benefits of speculative positions as a way for physical market participants to offload price risk. Let’s hope their findings create a healthy market response and increase liquidity.

Dodd Frank Implementation – Great Case for Agile Development

The Dodd-Frank Act (DFA) has been a major disruptor in 2012, especially in the energy industry. For those of you who are not familiar, the DFA has created a set of pretty extensive external reporting requirements, both for trade lifecycle events and for data aggregations. For organizations that are swap dealers, these reporting requirements are a major burden, requiring external reporting of trade lifecycle events, such as execution and confirmation, in time frames as short as 15 minutes. In the financial services world, these reporting burdens are not as big of a leap, as the infrastructure and generally accepted practices of external, real-time communication to facilitate straight-through processing are prevalent. The energy and commodities world is not at the same level of sophistication, both because commodities trading tends to require much more bespoke and complex deal modeling, and simply because it never needed to report events externally in real time.

In addition to these requirements, there has been volatility in the rules. Certain rules, such as Position Reporting, have been vacated (for now), leaving many projects in mid-flight. Other rules, such as Swap Data Repository reporting (Part 45), had data interfaces and workflow definitions offloaded onto multiple vendors (p 2139), resulting in a very fragmented ecosystem where many-to-many data mappings and formats were required for different asset classes. Additionally, the SDRs are implementing their systems during these rule changes and clarifications, resulting in a fairly unstable integration task. This type of work is perfect for agile development.

  • Short sprints allow you to push out functionality in short time frames, giving the team a natural checkpoint to make sure the functionality is still in line with the latest legal opinion or CFTC change (Physical Options, anyone?). Every two weeks, the team can stop, demo the functionality they have built and receive feedback for the next sprint. Volatile requirements require a tight, frequent feedback loop. If you are building a huge technical spec document for a DFA implementation, you are toast.
  • Code demos naturally push end-to-end testing, giving the users a look at the real implementation rather than waiting until the last minute. The users can make adjustments at an earlier stage, reducing project risk and increasing customer satisfaction.

I would highly encourage all the companies that haven’t started their DFA efforts to look to agile to manage the project. Your developers and your users will thank you for it.

Energy Trading and Risk Management: It’s Time for STP

Originally published on Derivsource, an online community and information source for professionals active in derivatives processing, technology and related services: http://www.derivsource.com/content/energy-trading-and-risk-management-it’s-time-stp#

 

The technology involved with energy trading and risk management is undergoing rapid and sometimes volatile changes, creating opportunities for companies to develop a distinct competitive advantage. Many forces, such as reduced profit margin on trading activities and increasingly complex regulatory requirements, are at work pushing companies to automate the transaction lifecycle to help increase profitability and reduce costs. Straight-through processing (STP) is the ability to have transaction data flow through a company’s different systems with little to no direct human intervention.  STP infrastructure has been adopted throughout the financial services industry with great success, but has yet to be widely utilized in the energy industry.

New regulatory reporting requirements such as those in the Dodd-Frank Act (DFA) are providing a foundation to push the energy industry toward STP. In particular, for swap data recordkeeping and reporting compliance, a transaction’s data may need to be sent to a swap data repository (SDR) within 15 minutes by year two. These timeframes can create data entry scenarios where there is simply not enough time for additional manual intervention or error correction. In addition, some SDRs are choosing to bundle their confirmation services along with DFA reporting, causing confirmation processes to begin immediately – something many energy traders may not be used to, because common practice is end-of-day transaction reconciliation.

How could STP benefit an organization engaged in trading activities? Automating the flow of trade lifecycle data shortens processing time by reducing manual intervention. It also reduces operational risk and costs by eliminating manual data entry mistakes, and can in turn improve decision making by integrating real-time data for stronger decision support analysis. Consequently, STP is a very powerful tool to increase the profitability of energy organizations’ trading activities.

For most energy trading organizations, there are many different touch points for automating the flow of trade data, including:

  • Connectivity to designated contract markets (DCMs) and swap execution facilities (SEFs) for the purpose of trade capture
  • Internal connectivity between different systems handling areas such as credit, scheduling, transportation and risk
  • Connectivity to external systems, including SDRs, market operators and even directly with counterparties to facilitate regulatory reporting, trade confirmation and other activities such as physical and financial settlements
  • Connectivity with transmission providers such as pipelines and independent system operators (ISO) for automation of scheduling

Ultimately, the concept is simple: when a trade lifecycle event occurs, energy companies should create a representation of that event using a common communication specification and broadcast the event to allow other systems to process it independently. The complexity lies in formulating the rules, structure and definitions of the data and events being broadcast. Adapting these procedures and systems to STP will reduce dependence on manual entry processes, assure more reliable data, and improve the speed of critical decision, risk and performance information and analysis.

With the tightening profit margins on trading activities and increased transaction collateral requirements, it is up to energy trading organizations to find new and inventive ways to reduce operational costs and increase efficiency. With the right foundation of a common communication protocol and long-term strategic vision for a company’s enterprise, the possibilities seem endless.

 

Getting started with FpML: Understanding the Schemas

I have been engaged in project work for Dodd-Frank for almost 18 months now. With Dodd-Frank comes a new set of external interfaces that companies need to communicate with, depending on which DFA requirements apply. Some of the interfaces are proprietary, but some are not. Two of those interfaces (Large Trader Reporting, and SDR reporting for one vendor) utilize FpML as the communication protocol. If you’ve started working with FpML, the first thing you will notice is how robust the schema is. It gives you the power to describe trade lifecycle events in XML in a very detailed manner. The next thing you will notice is that it is huge and can be overwhelming to get started with. Here’s a quick list of things that helped me understand the organization better.

  • There are 4 sets of schemas: confirmation, recordkeeping, reporting and transparency. Each of them is designed to be used on its own, but they share common objects and underlying definitions. This is the most confusing part of it. Simply understand that although the Account object is exactly the same, it is duplicated across the 4 sets of schemas, where the only differences (that matter*) are the values in the root node of the schemas, such as ecore:package=”org.fpml.confirmation”.
  • Think of the structures in two separate ways. First, there are nouns (Product Model Framework: http://www.fpml.org/documents/FpML5-products-framework.pdf). A trade. A confirmation. Second, think about how these nouns are passed around (Messaging Model Framework: http://www.fpml.org/documents/FpML5-messaging-framework.pdf). You want NonpublicExecutionReport to be sent out for SDR reporting, but it could also be used for incoming trade capture. Go and download the specs – http://www.fpml.org/spec/latest.php – and look at the examples. There are tons of them. Don’t get overwhelmed! They are verbose but just think of it as flexibility, not extra work.
  • What are the values supposed to be?? There is a whole set of codes associated with FpML elements. Let’s look at an example. In the recordkeeping section’s examples there is a record-ex100-new-trade.xml file. Within that file is a nonpublicExecutionReport (think trade execution) containing the details of the swap involved. Within the swap, there is an element called “productType”, and that element has a productTypeScheme=”http://www.fpml.org/coding-scheme/product-taxonomy”. If you just cut and paste the url into your browser, you will see a whole list of ISDA values for this item, which I thought was pretty cool. So then I cut the url down to see what else might be out there, and I wasn’t disappointed. Go take a look for yourself: http://www.fpml.org/coding-scheme/ Don’t reinvent the wheel! Use the codes already provided, and if something doesn’t fit, then make it fit! I walk into so many trading organizations where some analyst just didn’t realize this data was out there (guilty as charged) and started creating their own taxonomy. Stop doing that! (A small parsing sketch follows this list.)
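Here is the small parsing sketch mentioned above: pulling the productType value and its coding scheme out of one of the recordkeeping examples. The namespace URI is an assumption based on the FpML 5 recordkeeping schema; adjust it (and the file name) to the version you actually downloaded.

# Pull productType and its coding scheme out of an FpML recordkeeping example.
import xml.etree.ElementTree as ET

NS = "http://www.fpml.org/FpML-5/recordkeeping"  # assumed namespace URI

root = ET.parse("record-ex100-new-trade.xml").getroot()
for product_type in root.iter("{%s}productType" % NS):
    # The scheme attribute points at the ISDA product taxonomy mentioned above.
    print(product_type.text, product_type.get("productTypeScheme"))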

Don’t be afraid. Jump in and start creating. Take a trade representation from your company and see if you can create a NonpublicExecutionReport out of it. It’ll be painful at first but may give more insight as to how your company may be able to take advantage of industry standards in a more efficient manner.

* “that matter” meaning some schema attributes with default values are explicitly set in one schema but not in another, such as minOccurs=”0”.

Legal Entities: CFTC Interim Compliant Identifier

With the passage of the Part 45 rules of Dodd-Frank, the CFTC has authorized DTCC and SWIFT to build and maintain an application designed to hold a centralized list of CFTC Interim Compliant Identifiers (CICIs), a precursor to the Legal Entity Identifier described in the rule. Check out the application here: https://www.ciciutility.org

It contains the basic information about a legal entity, including legal names, the CICI code, and the legal address of the corporation. These codes are designed to reduce the complexity of identifying a legal entity involved in trading. They are going to be used throughout the reporting requirements, including the large trader reports for swap dealers as well as SDR reporting. One of the SDRs, DTCC’s Global Trade Repository, is going to be using these identifiers for collection of swap trade data, but ultimately all of the SDRs will be required to translate their internal representations to these new identifiers for the CFTC.

While this is a great first step, the potential for something greater exists. Legal entities change so many things all the time including billing addresses, shipping addresses, contact information, and even legal entity names or aliases. Wouldn’t it be nice to have a single place for counterparties to maintain their own information rather than having to update it in thousands of client systems?

 

 

Hart Energy Breakfast Club – Integrating U.S. Shale Oil

Update: Link to first presentation slideshow

On the first Thursday of every month, Hart Energy hosts a breakfast with great networking opportunities for those of us in the energy industry, in addition to very informative presentations on different aspects of energy, leaning heavily toward the upstream side, particularly E&P.

This month’s presentation was on U.S. shale oil. Here are the key takeaways:

  • There continues to be tremendous growth in US and North American oil production, and volumes don’t look like they will peak until 2016. This new production is mainly in the mid-continent, particularly in the Bakken and Eagle Ford formations.
  • This new production is creating increased demand for transport solutions. Pipelines cannot be built to all of the remote locations quickly enough, and rail transportation seems to be the targeted solution.
  • There will be hundreds of new rail facilities throughout the mid-continent to help with the logistics of moving this crude to many regions – especially the Bakken crude, known to have very good refining margins, to PADD 1, which needs light, sweet crude for its aging infrastructure that is not suited to handle the heavy, sour stuff.
  • With new rail receipt points popping up everywhere, in addition to reversals of pipelines (Seaway, etc.) away from Cushing, this transportation development is also creating a strong flow of crude south toward the Gulf Coast. However, this will eventually create issues: once refinery capacity at the coast is fully satisfied, the crude will need somewhere else to go.
  • The WTI/LLS spread will continue to be in play and a great transportation arbitrage opportunity.

With energy policy being such a hot topic in the election, the question still comes up of whether the US will ever be truly energy independent. If the production numbers continue to develop as analysts estimate, the production could be enough to satisfy the demand of domestic refineries. The real question is: will there be enough transportation to make domestic crude even viable in places like PADD 1 and PADD 5? It’s not just about price or location, it’s also about being able to get there.

 
