Stephen Nimmo

Energy Trading, Risk Management and Software Development


Are you ready for the cloud?

The “Cloud” is one of the most overused and misunderstood buzzwords in technology today. For most people who have watched too many Microsoft commercials touting their cloud-integrated operating systems, it’s an abstract resource in the sky that everything magically connects to and where things just happen. What most people miss is that it’s simply an external network of computation resources (computers). Just like the server at work with all your spreadsheets on it (the ubiquitous S: drive or something like it), the public cloud is simply a set of computers located at a data center that you have access to.

But this is where the true cloud conversation starts, and there are many other aspects of the cloud that people do not understand. To understand cloud computing, especially a public cloud such as Amazon EC2, you must first understand virtualization. To understand virtualization, you need to understand operating systems and how they work with your computer. Essentially, a computer is made up of a bunch of different components (hard drive, memory, network card) that talk to each other through a common physical pathway (the motherboard). But there is another pathway required for these communications, and that is the operating system (OS). Each component, such as the hard drive, only knows about itself and exposes an interface through which software can interact with it over that physical connection, and that software is typically the OS. When most people buy a computer, it has a single OS on it that single-handedly controls all the hardware interactions.

Virtualization simply adds an additional layer that allows multiple operating systems access to the same hardware. Instead of having one OS running on a computer, you can have two or more, each running as a virtual machine (VM). The virtualization software acts as a traffic cop, taking instructions and other interactions from each OS and ensuring they all get access to the hardware devices they need, while also making sure each device gets the needed information back to the correct OS. There are lots of examples of virtualization software, most notably VMware and VirtualBox, that will let users run multiple OSes on their machines. You can run an instance of Ubuntu on Windows 7. There are huge benefits to be gained from this, but when it comes to public cloud computing, the main benefit is shared hardware and the abstraction of computing resources away from physical hardware.
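
To show just how scriptable that extra layer is, here is a minimal sketch that drives VirtualBox’s VBoxManage command-line tool from Python to create and boot a headless Ubuntu guest. The VM name, memory size, and installer ISO path are placeholders I made up; treat this as an illustration of the idea, not a recipe.

```python
import subprocess

VM_NAME = "ubuntu-demo"          # placeholder VM name
UBUNTU_ISO = "ubuntu-12.04.iso"  # placeholder path to an installer ISO

def run(*args):
    """Run a VBoxManage command and raise if it fails."""
    subprocess.check_call(["VBoxManage"] + list(args))

# Register a new VM definition with the hypervisor.
run("createvm", "--name", VM_NAME, "--ostype", "Ubuntu_64", "--register")

# Give the guest OS its slice of the shared hardware: RAM, CPUs, NAT networking.
run("modifyvm", VM_NAME, "--memory", "2048", "--cpus", "2", "--nic1", "nat")

# Attach the installer ISO so the guest has something to boot from.
run("storagectl", VM_NAME, "--name", "IDE", "--add", "ide")
run("storageattach", VM_NAME, "--storagectl", "IDE",
    "--port", "0", "--device", "0", "--type", "dvddrive", "--medium", UBUNTU_ISO)

# Boot the guest headless; it now runs alongside the host OS on the same hardware.
run("startvm", VM_NAME, "--type", "headless")
```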

Once you understand how virtualization works, it’s not a big leap to realize the public cloud is simply an automated virtualized environment allowing users to create new VMs on demand. The user doesn’t have to worry about having enough hardware available or even where that VM is located. They simply create an instance of a VM by specifying attributes of how they want the machine configured, such as processor speed or memory capacity, and don’t care about where or how. The public cloud is simply a manifestation of a concept that has been maturing for quite a while – turning computation resources into a homogeneous commodity rather than a specialized product.
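
To see how little the user has to say about the underlying hardware, here is roughly what requesting a VM from EC2 looks like with the boto library. The region, AMI ID, key pair, and security group are placeholders; the point is that you describe the machine you want and never mention where it should live.

```python
import boto.ec2

# Connect to a region; we never say which physical machine or rack we want.
conn = boto.ec2.connect_to_region("us-east-1")

# Ask for a VM by describing it: image, size, keys. EC2 decides where it runs.
reservation = conn.run_instances(
    "ami-12345678",              # placeholder AMI (machine image) ID
    instance_type="m1.small",    # desired CPU/memory profile
    key_name="my-keypair",       # placeholder SSH key pair name
    security_groups=["default"])

instance = reservation.instances[0]
print("Requested instance %s, state: %s" % (instance.id, instance.state))
```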

This is the point where light bulbs start turning on above the heads of executives. They start looking at opportunities to use generic, commoditized computing resources and remove the risks and costs associated with maintaining and managing data centers. All of the hardware that sits unused for weeks at a time because it’s only needed for special cases, like emergency production support releases, can be sunset. We can build performance testing environments in a single day and then delete the entire thing by the time we go home. The possibilities are endless.
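
Deleting that throwaway environment before going home can itself be a tiny script. This sketch assumes the instances were tagged env=perf-test when they were created; the tag name and region are my own placeholders.

```python
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# Find every instance we tagged as part of the throwaway perf-test environment.
reservations = conn.get_all_instances(filters={"tag:env": "perf-test"})
instance_ids = [i.id for r in reservations for i in r.instances]

# Tear the whole environment down; nothing is left running (or billing) overnight.
if instance_ids:
    conn.terminate_instances(instance_ids=instance_ids)
    print("Terminated: %s" % ", ".join(instance_ids))
```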

But let’s be clear about something. There is something about public cloud infrastructure that makes it special. It’s not the virtualization software. It’s not the hardware used. It’s the people.

Public clouds like Amazon EC2 have the best and brightest operations engineers on the planet, creating automated processes behind the scenes so that users like us just need to click a button. It’s not easy. Their environment is a hurricane of requests and issues that most people can’t dream of. They manage half a million Linux servers. 500,000 servers. Most people ask how they do it, and the answer is simple: they automate everything they can. Implementing that simple answer is where most people start to run into issues. Luckily, Amazon hires the best ops people in the world and probably pays them a small fortune to do this, neither of which is available to most businesses. Public cloud is about standing on the shoulders of the best operations talent in the world and taking advantage of their automation procedures.

Remember those light bulbs I referred to earlier above the heads of executives? Let’s take a sledgehammer to them with a single statement: all the data you put into the public cloud sits on shared hardware. The CRM data in your cloud database? It could be sitting on the same drive, and the same platter, as your competitor’s. Your super-secret computation process could be sharing processing time with Reddit on the same CPU. The public cloud means we all share the same hardware. While there are quite a few security measures in place to make sure the virtualized segregation holds, we all know what typically happens: all secure data is deemed not cloud-worthy and our virtualized hopes and dreams die with a whimper.

Until someone says the words “Private Cloud”.

Here’s the problem with “Private Cloud”. The public clouds are good because of the people and the culture. It’s not the tools. In fact, I would be willing to bet that 99% of the tools used by Amazon could easily be purchased or found via open source. Most organizations simply don’t have the resources, the processes, or the intestinal fortitude to engage in such an endeavor. You need three things to be successful: a culture of automation; talented hybrid operations and development engineers willing to blur the line between coding and deployment; and tools to automate. You can buy the tools. You can buy the people. You can’t buy culture.

Let’s get past the limitations I stated earlier and create a hypothetical company with its own private cloud. Unless your software is built for the cloud, you’re not really buying anything. When I say built for the cloud, imagine being able to scale an application horizontally (think web application) by adding new web servers on demand. Here’s a basic list of how to create a VM from scratch and take it into a production cluster (a rough sketch of what this automation might look like follows the list):

  1. Create a new virtual machine.
  2. Install the correct image/OS.
  3. Automatically configure the VM to the specifications needed for the application (install Java, run patches, etc).
  4. Install any containers (JBoss, Tomcat, whatever) and configure them automatically (JMS, DB Connections, Clustering with IP addresses, etc).
  5. Deploy an EAR or WAR from your internal build repository and have it automatically join any existing cluster seamlessly.
  6. Configure the network and other appliances to recognize and route requests to the new server automatically.
  7. Do everything just stated in reverse.
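	
To make the list concrete, here is a hedged sketch of what steps 1 through 7 might look like against EC2 with boto. The AMI, key pair, security group, load balancer name, and the bootstrap script that installs Java and Tomcat and pulls the WAR are all placeholders standing in for whatever your configuration management and build repository actually do.

```python
import time
import boto.ec2
import boto.ec2.elb

REGION = "us-east-1"
LOAD_BALANCER = "web-cluster-lb"   # placeholder load balancer name

# Steps 2-5: a user-data bootstrap script baked into the instance request.
# In a real shop this would call Chef/Puppet and pull the WAR from your build repo.
BOOTSTRAP = """#!/bin/bash
yum -y install java-1.6.0-openjdk tomcat6
wget -O /var/lib/tomcat6/webapps/app.war http://builds.example.com/app.war  # placeholder repo URL
service tomcat6 start
"""

ec2 = boto.ec2.connect_to_region(REGION)
elb = boto.ec2.elb.connect_to_region(REGION)

# Step 1: create the VM; steps 2-5 run via the bootstrap script at first boot.
reservation = ec2.run_instances(
    "ami-12345678",                # placeholder AMI with the base OS image
    instance_type="m1.large",
    key_name="my-keypair",         # placeholder key pair
    security_groups=["web-tier"],  # placeholder security group
    user_data=BOOTSTRAP)
instance = reservation.instances[0]

# Wait for the instance to come up before wiring it into the cluster.
while instance.state != "running":
    time.sleep(10)
    instance.update()

# Step 6: tell the load balancer to start routing requests to the new server.
elb.register_instances(LOAD_BALANCER, [instance.id])

# Step 7, everything in reverse: pull it out of the cluster and throw the VM away.
# elb.deregister_instances(LOAD_BALANCER, [instance.id])
# ec2.terminate_instances(instance_ids=[instance.id])
```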

Unless you can do something like this automatically, you aren’t cloud. You are virtualized. That’s not to say that being a company that utilizes virtualization isn’t a great first step. There are many uses of virtualization that could provide benefits to a company quickly, such as virtualized QA environments that can be spun up on short notice to run emergency production support releases. Virtualization is a great thing by itself, but virtualization is not cloud.

Secondly, the fourth and fifth steps are where most software shops get caught up. Their applications are simply not built for it. When the architects laid the groundwork for the application, they didn’t think about being able to quickly add new instances of the application to an existing cluster, or about decisions like not relying on DNS to handle request routing. Some decisions that are core to an application, whether made deliberately or never addressed at all, leave it unable to handle these types of environments. It’s a square peg in a round hole, and while some applications can be retrofitted to handle the automation, some will need to be rearchitected. There are even some applications that don’t make sense in the cloud.
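
One way around the DNS problem, offered as a hypothetical pattern rather than a prescription, is for each new application instance to announce itself to the load balancer when it boots and remove itself when it shuts down. A minimal sketch with boto, assuming the instance can discover its own ID from the EC2 metadata service and that the load balancer name is configuration:

```python
import boto.utils
import boto.ec2.elb

LOAD_BALANCER = "web-cluster-lb"   # placeholder load balancer name

# Ask the EC2 metadata service who we are; no DNS entry or static config involved.
instance_id = boto.utils.get_instance_metadata()["instance-id"]

elb = boto.ec2.elb.connect_to_region("us-east-1")

def join_cluster():
    """Called at application startup: start receiving routed requests."""
    elb.register_instances(LOAD_BALANCER, [instance_id])

def leave_cluster():
    """Called at shutdown: drop out of the cluster before the VM disappears."""
    elb.deregister_instances(LOAD_BALANCER, [instance_id])
```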

I encourage everyone to give virtualization a look. Every organization that has multiple environments for development, QA and UAT would benefit from virtualizing them. There are many software packages and platforms out there that are easy to use, and some are even open source. But before you start down the cloud path, make sure you do your due diligence. Are you ready for the cloud? Sometimes the correct answer is no. And that’s OK too.

Dodd Frank Implementation – Great Case for Agile Development

The Dodd-Frank Act (DFA) has been a major disruptor in 2012, especially in the energy industry. For those of you who are not familiar, DFA has created a set of pretty extensive external reporting requirements, both for trade life-cycle events and for data aggregations. For organizations that are Swap Dealers, these reporting requirements are a major burden, requiring external reporting of trade life-cycle events, such as execution and confirmation, in time frames as short as 15 minutes. In the financial services world, these reporting burdens are not as big of a leap, as the infrastructure and generally accepted practices of external, real-time communication to facilitate straight-through processing are already prevalent. The energy and commodities world is not at the same level of sophistication, both because commodities trading tends to require much more bespoke and complex deal modeling, and simply because it has never needed to report events externally in real time.

In addition to these requirements, there has been volatility in the rules themselves. Certain rules, such as Position Reporting, have been vacated (for now), leaving many projects in mid-flight. Other rules, such as Swap Data Repository reporting (Part 45), offloaded data interfaces and workflow definitions onto multiple vendors (p 2139), resulting in a very fragmented ecosystem where many-to-many data mappings and formats were required for different asset classes. Additionally, SDRs are implementing their systems during these rule changes and clarifications, resulting in a fairly unstable integration task. This type of work is perfect for agile development.

  • Short sprints would allow you to push out functionality in short time frames, giving the team a natural checkpoint to make sure the functionality was still in line with the latest legal opinion or CFTC change (Physical Options, anyone?). Every two weeks, the team can stop, demo the functionality they have built and receive feedback for the next sprint. Volatile requirements require a tight, frequent feedback loop. If you are building a huge technical spec document for a DFA implementation, you are toast.
  • Code demos naturally push end-to-end testing, giving the users a look at the real implementation rather than waiting until the last minute. The users can make adjustments at an earlier stage, reducing project risk and increasing customer satisfaction.

I would highly encourage all the companies that haven’t started their DFA efforts to look to agile to manage the project. Your developers and your users will thank you for it.
