Stephen Nimmo

Energy Trading, Risk Management and Software Development

Month: October 2012

Generate Java objects for FpML using JAXB and Maven: The Easy Way!

First, go get the schemas from http://www.fpml.org/spec/latest.php. You will have to register. When you log in, make sure you have the Specifications radio button selected.

We are going to work with the latest version. Download the zip file for whichever group of FpML schemas you want.

Once that file is downloaded, fire up Eclipse and create a generic Maven project.

For generating the JAXB objects, we are going to use a Maven plugin from Codehaus. Documentation for the plugin can be found at http://mojo.codehaus.org/jaxb2-maven-plugin/index.html. Notice that I put the URL for the plugin into a comment at the top of the plugin configuration.

To use the plugin out of the box (mostly), let’s start by creating a directory for the XSDs. Because there are four sets of XSDs for FpML, I organize them by creating the usual src/main/xsd folder and then unzipping the contents of each FpML download into it. Each zip should unzip into a directory named after the download you selected (e.g. xml_confirmation).
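If you grab all four downloads, you end up with a layout like this (the directory names match the plugin configuration below):

src/main/xsd/xml_confirmation/
src/main/xsd/xml_recordkeeping/
src/main/xsd/xml_reporting/
src/main/xsd/xml_transparency/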

I like making my JAXB objects serializable, so I add an XJB binding that makes every generated class implement java.io.Serializable (with serialVersionUID set to 54). Create the folder src/main/xjb and add a jaxb-bindings.xml file in there with:

[xml]
<?xml version="1.0" encoding="UTF-8"?>
<bindings xmlns="http://java.sun.com/xml/ns/jaxb"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
          xsi:schemaLocation="http://java.sun.com/xml/ns/jaxb http://java.sun.com/xml/ns/jaxb/bindingschema_2_0.xsd"
          version="2.1">
    <globalBindings>
        <serializable uid="54" />
    </globalBindings>
</bindings>
[/xml]
Next we are going to add the plugin. Because FpML comes as four different downloads, and you may want to generate objects for all 4 flavors, we will use the “Multiple schemas with different configurations” approach, which is documented at http://mojo.codehaus.org/jaxb2-maven-plugin/usage.html. For our purposes, the implementation looks like this.

[xml]
<plugin>
    <!-- http://mojo.codehaus.org/jaxb2-maven-plugin/index.html -->
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>jaxb2-maven-plugin</artifactId>
    <version>1.5</version>
    <executions>
        <execution>
            <id>xml_confirmation-xjc</id>
            <goals>
                <goal>xjc</goal>
            </goals>
            <configuration>
                <schemaDirectory>${project.basedir}/src/main/xsd/xml_confirmation</schemaDirectory>
                <packageName>org.fpml.confirmation</packageName>
                <staleFile>${project.build.directory}/jaxb2/.confirmationXjcStaleFlag</staleFile>
            </configuration>
        </execution>
        <execution>
            <id>xml_recordkeeping-xjc</id>
            <goals>
                <goal>xjc</goal>
            </goals>
            <configuration>
                <schemaDirectory>${project.basedir}/src/main/xsd/xml_recordkeeping</schemaDirectory>
                <packageName>org.fpml.recordkeeping</packageName>
                <staleFile>${project.build.directory}/jaxb2/.recordkeepingXjcStaleFlag</staleFile>
            </configuration>
        </execution>
        <execution>
            <id>xml_reporting-xjc</id>
            <goals>
                <goal>xjc</goal>
            </goals>
            <configuration>
                <schemaDirectory>${project.basedir}/src/main/xsd/xml_reporting</schemaDirectory>
                <packageName>org.fpml.reporting</packageName>
                <staleFile>${project.build.directory}/jaxb2/.reportingXjcStaleFlag</staleFile>
            </configuration>
        </execution>
        <execution>
            <id>xml_transparency-xjc</id>
            <goals>
                <goal>xjc</goal>
            </goals>
            <configuration>
                <schemaDirectory>${project.basedir}/src/main/xsd/xml_transparency</schemaDirectory>
                <packageName>org.fpml.transparency</packageName>
                <staleFile>${project.build.directory}/jaxb2/.transparencyXjcStaleFlag</staleFile>
            </configuration>
        </execution>
    </executions>
    <configuration>
        <outputDirectory>${project.basedir}/src/main/java</outputDirectory>
    </configuration>
</plugin>
[/xml]

Once everything is there, simply run the generate-sources phase with Maven (mvn generate-sources from the command line; I use the IDE because I am lazy) and it will generate your entire object model. Start coding….

But wait! There’s more. One last trick. The JAXB generation does not put @XmlRootElement annotations on ANY of the objects. Why? Travel back to 2006 for the answer: http://weblogs.java.net/blog/2006/03/03/why-does-jaxb-put-xmlrootelement-sometimes-not-always

So how do we marshal? Use the ObjectFactory. Here’s some example code which took me way too long to figure out.

[java]
import java.util.UUID;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;

import org.fpml.recordkeeping.MessageId;
import org.fpml.recordkeeping.NonpublicExecutionReport;
import org.fpml.recordkeeping.ObjectFactory;
import org.fpml.recordkeeping.RequestMessageHeader;

JAXBContext jc = JAXBContext.newInstance("org.fpml.recordkeeping");
ObjectFactory objectFactory = new ObjectFactory();
Marshaller marshaller = jc.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);

NonpublicExecutionReport report = objectFactory.createNonpublicExecutionReport();
RequestMessageHeader header = new RequestMessageHeader();
MessageId messageId = objectFactory.createMessageId();
messageId.setValue(UUID.randomUUID().toString());
header.setMessageId(messageId);
report.setHeader(header); // attach the header; without this, report.getHeader() is null

// Use the ObjectFactory to create the JAXBElement<NonpublicExecutionReport> wrapper. This is the key!
marshaller.marshal(objectFactory.createNonpublicExecutionReport(report), System.out);
[/java]
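Going the other direction is the same trick in reverse: the unmarshaller hands you a JAXBElement wrapper that you unwrap yourself. Here’s a minimal sketch, assuming the generated org.fpml.recordkeeping package from above and one of the FpML example files (record-ex100-new-trade.xml) sitting in the working directory:

[java]
import java.io.File;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBElement;
import javax.xml.bind.Unmarshaller;

import org.fpml.recordkeeping.NonpublicExecutionReport;

JAXBContext jc = JAXBContext.newInstance("org.fpml.recordkeeping");
Unmarshaller unmarshaller = jc.createUnmarshaller();

// Because there is no @XmlRootElement, the root comes back wrapped in a JAXBElement.
JAXBElement<?> element = (JAXBElement<?>) unmarshaller.unmarshal(new File("record-ex100-new-trade.xml"));
NonpublicExecutionReport report = (NonpublicExecutionReport) element.getValue();
System.out.println(report.getHeader().getMessageId().getValue());
[/java]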
Alright, get started.

Getting started with FpML: Understanding the Schemas

I have been engaged in project work for Dodd-Frank for almost 18 months now. With Dodd-Frank comes a new set of external interfaces that companies must communicate with, depending on the DFA requirements. Some of the interfaces are proprietary, but some are not. Two of those interfaces (Large Trader Reporting, and SDR reporting for one vendor) utilize FpML as the communication protocol. If you’ve started working with FpML, the first thing you will notice is how robust the schema is. It gives you the power to describe trade lifecycle events in XML in a very detailed manner. The next thing you will notice is that it is huge and can be overwhelming to get started with. Here’s a quick list of things that helped me understand the organization better.

  • There are 4 sets of schemas: confirmation, recordkeeping, reporting and transparency. Each of them is designed to be used on its own, but they share common objects and underlying definitions. This is the most confusing part. Simply understand that although the Account object, for example, is exactly the same, it is duplicated across the 4 sets of schemas, and the only differences (that matter*) are the values in the root node of the schemas, such as ecore:package=”org.fpml.confirmation”.
  • Think of the structures in two separate ways. First, there are nouns (Product Model Framework: http://www.fpml.org/documents/FpML5-products-framework.pdf). A trade. A confirmation. Second, think about how these nouns are passed around (Messaging Model Framework: http://www.fpml.org/documents/FpML5-messaging-framework.pdf). You might want a NonpublicExecutionReport to be sent out for SDR reporting, but it could also be used for incoming trade capture. Go and download the specs – http://www.fpml.org/spec/latest.php – and look at the examples. There are tons of them. Don’t get overwhelmed! They are verbose, but think of it as flexibility, not extra work.
  • What are the values supposed to be? There is a whole set of coding schemes associated with FpML elements. Let’s look at an example (sketched in the snippet after this list). In the recordkeeping examples there is a record-ex100-new-trade.xml file. Within that file is a nonpublicExecutionReport (think trade execution) containing the details of the swap involved. Within the swap there is an element called “productType”, and that element has a productTypeScheme=”http://www.fpml.org/coding-scheme/product-taxonomy”. If you just cut and paste the URL into your browser, you will see a whole list of ISDA values for this item, which I thought was pretty cool. So then I cut the URL down to see what else might be out there, and I wasn’t disappointed. Go take a look for yourself: http://www.fpml.org/coding-scheme/. Don’t reinvent the wheel! Use the codes already provided, and if something doesn’t fit, make it fit! I walk into so many trading organizations where some analyst just didn’t realize this data was out there (guilty as charged) and started creating their own taxonomy. Stop doing that!
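Here is a rough sketch of what that looks like inside the example file (the surrounding swap details are elided, and the taxonomy value shown is just one illustrative entry from the coding scheme):

[xml]
<swap>
    <productType productTypeScheme="http://www.fpml.org/coding-scheme/product-taxonomy">InterestRate:IRSwap:FixedFloat</productType>
    <!-- swap streams, calculation periods and other details elided -->
</swap>
[/xml]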

Don’t be afraid. Jump in and start creating. Take a trade representation from your company and see if you can create a NonpublicExecutionReport out of it. It’ll be painful at first, but it may give you more insight into how your company can take advantage of industry standards in a more efficient manner.

* “that matter” meaning some schema attributes with default values are explicitly set in one schema but not the other, such as minOccurs="0".

Legal Entities: CFTC Interim Compliant Identifier

With the passage of the Part 45 rules of Dodd-Frank, the CFTC has authorized DTCC and SWIFT to build and maintain an application designed to hold a centralized list of CFTC Interim Compliant Identifiers (CICIs), a precursor to the Legal Entity Identifier described in the rule. Check out the application here: https://www.ciciutility.org

It contains the basic information around a legal entity, including legal names, the CICI code, and the legal address of the corporation. These codes are designed to reduce the complexity of identifying a legal entity involved in trading. They are going to be used throughout Dodd-Frank reporting, including the large trader reports for swap dealers, but also the SDRs. One of the SDRs, DTCC’s Global Trade Repository, is going to use these identifiers for collection of swap trade data, and ultimately all of the SDRs will be required to translate their internal representations to these new identifiers for the CFTC.

While this is a great first step, the potential for something greater exists. Legal entities change so many things all the time including billing addresses, shipping addresses, contact information, and even legal entity names or aliases. Wouldn’t it be nice to have a single place for counterparties to maintain their own information rather than having to update it in thousands of client systems?

Deployment Environments and Support: How many servers do you need?

This discussion came up today, so I thought it would make a good topic. More often than not, my clients fail to prepare properly for production deployments. They procure all the servers, switches and hardware. They check their checklists and prepare the support documentation. They get ready for the big weekend, and everyone is nervous and excited. Go live. It’s a glorious day. The next few weeks are the honeymoon period. Most of the development staff is busy fixing things not caught in the previous testing efforts, and management usually has a moratorium in place on any new development, not only to give the developers a break, but also to ensure system stability. Then the first request comes in for an enhancement. Honeymoon over, and this is where the pain starts.

First, let’s talk about environments. There are 5 common environments:

  • LOCAL – everything running on your desktop or laptop. Soup to nuts. You are typically working on your own stuff with little regard for others.
  • DEV – the first staging environment, where multiple developers’ work commingles.
  • TEST – the environment where the QA team gets a hold of your code and tries to break it.
  • UAT – the environment where users get a hold of your code and either try to break it or tell you they want changes (yes, they should have demoed in DEV or even LOCAL).
  • PROD – the big time.

There is a book to write about testing and priorities in these environments, but for now we will stick with production deployments. Starting with the first release, it’s fairly simple. Take version 1.0. Deploy to DEV, then to TEST, then to UAT and then to PROD. Easy. During development, if there was a change, you would make the change and redeploy starting at DEV.

So let’s look at the environments after the 1.0 deployment.

  • LOCAL – 1.0
  • DEV – 1.0
  • TEST – 1.0
  • UAT – 1.0
  • PROD – 1.0

SUPPORT ISSUE! They found a bug. It’s critical and they need a patch tonight. Fine: make the change locally. Build and deploy to DEV. Test. Build and deploy to TEST, run automated regressions. Build to UAT, the user checks and signs off, and boom, move to PROD. It’s all simple until you start working on new stuff.

So now your team is halfway through 1.1 development. Take a look at the environments.

  • LOCAL – 1.1
  • DEV – 1.1
  • TEST – 1.0
  • UAT – 1.0
  • PROD – 1.0

SUPPORT ISSUE! It’s a bug, it’s priority one. Patch tonight. Fine. I’ll just… wait. I have to switch codebases over to the 1.0 version. And we will need to convert the DEV servers back to 1.0. Oh, and the database changes. Oh, and the server environment scripts… The headache begins. Now take an even more complicated situation, where the dev team delivered a 1.3 build to the QA team a week ago.

  • LOCAL – 1.4
  • DEV – 1.4
  • TEST – 1.3
  • UAT – 1.2
  • PROD – 1.2

Now a prod support issue means rolling back multiple environments by multiple versions. Nasty.

How do we get around this? Multiple servers and/or virtualization.

For every non-local environment, you will need scaled environments down the chain. Using the example above, here’s what is needed:

DEV        TEST        UAT        PROD
dev.prod   test.prod   uat.prod   prod
dev.uat    test.uat    uat
dev.test   test
dev

As you can see, we went from 4 environments to 10. But it cleans up the issues, as the development team can deploy bug fixes to the appropriate environments without reverting any environment. Let’s take a look at that nasty scenario above. EMERGENCY BUG FIX for 1.2. Once the code is complete, push to dev.prod, test.prod and uat.prod, get signoff, then prod. Once this is complete (or during the process, if your deployment system is quick), you can go back and deploy the fix to the other environments (dev.uat, test.uat, uat).

DEV              TEST              UAT              PROD
dev.prod (1.2)   test.prod (1.2)   uat.prod (1.2)   prod (1.2)
dev.uat (1.2)    test.uat (1.2)    uat (1.2)
dev.test (1.3)   test (1.3)
dev (1.4)

What does this do?

  • Keeps the rest of your development team working on 1.4 and fixing issues with 1.3.
  • Keeps your QA team pounding on 1.3.
  • Keeps your UAT moving.
  • Provides a secondary UAT environment for any other issues that may need to be modeled.

Essentially, it keeps everyone working.

So what do you need?

  • Push-button deployment processes from a central build server. NO IDE deployments. (AntHill, CruiseControl, Bash scripts/Ant/Maven, etc.)
  • Servers. 10 of them. Recognize that for bug fixes we aren’t normally doing much performance testing, so these servers don’t need to be too meaty. Buy a single (big) box for DEV and virtualize all the dev.* servers on it. Need to run performance testing? Stop the other servers, reallocate everything to a single server, and switch back when done.
  • SOLID communication mechanisms for the entire team – developers, testers, users. Developers need to know which branches need to be merged back and which branches are built to which servers, and you should provide detailed deployment status to your users so they aren’t calling you asking where their fix is.
  • Build-integrated unit testing. Test on all builds.
  • Automated regression testing. Test after every deployment.
  • A deployment manager with OCD issues to make sure everyone is following the rules.

Hart Energy Breakfast Club – Integrating U.S. Shale Oil

Update: Link to first presentation slideshow

On the first Thursday of every month, Hart Energy hosts a breakfast offering great networking opportunities for those of us in the energy industry, along with very informative presentations on different aspects of energy, leaning heavily toward the upstream side, particularly E&P.

This month’s presentation was on U.S. shale oil. Here are the key takeaways:

  • There continues to be tremendous growth in US and North American oil production, and it doesn’t look like volumes will peak until 2016. This new production is mainly in the mid-continent, particularly the Bakken and Eagle Ford formations.
  • This new production is creating increased demand for transport solutions. Pipelines cannot be built to all of the remote locations quickly enough, and rail transportation seems to be targeted to solve the problem.
  • There will be hundreds of new rail facilities throughout the mid-continent to help with the logistics of moving this crude to many regions. Bakken crude, known to have very good refining margins, is especially needed in PADD 1, which wants the light, sweet crude for its aging infrastructure that is not suited to handling the heavy, sour stuff.
  • With new rail receipt points popping up everywhere, in addition to reversals of pipelines (Seaway, etc.) away from Cushing and toward the Gulf Coast, this transportation development is also creating a strong flow of crude south toward the Gulf Coast. However, this will eventually create issues: once the refinery capacity at the coast is fully satisfied, the crude will need somewhere else to go.
  • The WTI/LLS spread will continue to be in play and a great transportation arbitrage opportunity.

With energy policy being such a hot topic in the election, questions still come up about whether the US will ever be truly energy independent. If production continues to develop as analysts estimate, it could be enough to satisfy the demand of domestic refineries. The real question is: will there be enough transportation to make domestic crude viable in places like PADD 1 and PADD 5? It’s not just about price or location; it’s also about being able to get it there.
