Stephen Nimmo

Energy Trading, Risk Management and Software Development


PostgreSQL Partitioning Trigger and Function

I’ve been working on a project that needs a time series database, and since we are building on PostgreSQL I needed to figure out how to partition the data. Having done this a number of times in Oracle, I was curious about the comparison, and after an hour of poking around the internet I settled on an implementation that is fairly elegant.

First the time series table.

CREATE TABLE main.production_meter_ts
(
  production_meter_ts_id bigserial PRIMARY KEY,
  meter_id bigint NOT NULL REFERENCES main.meter(meter_id),
  start_timestamp timestamp NOT NULL,
  end_timestamp timestamp NOT NULL,
  timezone text,
  value numeric(10, 4),
  uom text
) WITH ( OIDS = FALSE );

We will want to create monthly partitions for this data to help with query performance. From here, there are two choices: create all of your partitions up front and run a maintenance job that makes sure partitions exist ahead of the incoming data, or write a trigger function that checks for the presence of the partition before inserting and creates it if needed. I chose the second way.
CREATE OR REPLACE FUNCTION main.create_production_meter_ts_partition_and_insert()
RETURNS TRIGGER AS $BODY$
DECLARE
  partition_date TEXT;
  partition TEXT;
BEGIN
  -- Partition name is the parent table name plus the month of the row, e.g. production_meter_ts_2017_06
  partition_date := to_char(NEW.start_timestamp, 'YYYY_MM');
  partition := TG_TABLE_NAME || '_' || partition_date;
  -- Create the monthly child table the first time a row for that month shows up
  IF NOT EXISTS (SELECT relname FROM pg_class WHERE relname = partition)
  THEN
    RAISE NOTICE 'A partition has been created %', partition;
    EXECUTE 'CREATE TABLE ' || TG_TABLE_SCHEMA || '.' || partition || ' () INHERITS (' || TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME || ');';
  END IF;
  -- Redirect the row into the child table and skip the insert on the parent
  EXECUTE 'INSERT INTO ' || TG_TABLE_SCHEMA || '.' || partition || ' SELECT $1.*' USING NEW;
  RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;

CREATE TRIGGER create_production_meter_ts_partition_and_insert_trigger
BEFORE INSERT ON main.production_meter_ts
FOR EACH ROW EXECUTE PROCEDURE main.create_production_meter_ts_partition_and_insert();

The cool thing about this is that there are no maintenance jobs to run (and to forget to run) to create the partitions. The bad thing is that the existence check runs on every insert.

Extra bonus – this is fairly generic if you are partitioning by date.
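
For completeness, here is a quick application-side sketch in plain JDBC. The connection details and the meter_id are placeholder assumptions; the point is that the application only ever inserts against the parent table and the trigger routes the row to the monthly child. One gotcha to be aware of: because the trigger returns NULL, the insert on the parent table reports zero affected rows.

[java]
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class MeterUsageInsertExample {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details; assumes the PostgreSQL JDBC driver is on the classpath
        // and that a row with meter_id = 1 already exists in main.meter.
        try (Connection connection = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/demo", "demo", "demo");
             PreparedStatement statement = connection.prepareStatement(
                "INSERT INTO main.production_meter_ts "
                + "(meter_id, start_timestamp, end_timestamp, timezone, value, uom) "
                + "VALUES (?, ?, ?, ?, ?, ?)")) {

            statement.setLong(1, 1L);
            statement.setTimestamp(2, Timestamp.valueOf("2017-06-01 00:00:00"));
            statement.setTimestamp(3, Timestamp.valueOf("2017-06-01 01:00:00"));
            statement.setString(4, "US/Central");
            statement.setBigDecimal(5, new BigDecimal("42.1250"));
            statement.setString(6, "KWH");

            // The trigger redirects the row into production_meter_ts_2017_06 and returns NULL,
            // so the INSERT against the parent table reports zero affected rows.
            int affected = statement.executeUpdate();
            System.out.println("Rows reported on parent table: " + affected);
        }
    }
}
[/java]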

Definition of Done

When a developer says they are done, what does that mean? I’ve worked on many different client projects and it’s been slightly different every time. Developers are often left to take responsibility for ensuring a task is complete, yet many times they are working against subjective, nuanced and changing expectations based on the whims of project managers or the business. A good Definition of Done not only brings value by ensuring completeness of work, but it also shields the developer from random, tedious requests. Here are a few principles for creating your team’s definition of done.

  • Write it down, communicate it out and revisit it every sprint. Do not assume the team members know and understand what it means to complete a task.
  • Use it during estimations and sprint planning as a baseline for what needs to be done for each task, but don’t get too rigid with it. It’s a guideline, not a mandate.
  • Don’t try to get too enterprisey with it. While a similar pattern or template could be used across teams, make it unique to the technical team, with the details needed to make the process easy and complete. Need to update the user docs? Put a link directly to the document in the checklist. Again, make the right thing to do the easiest thing to do.
  • Peer review the work and create an incentive for the team members who have the most reviews per month. It doesn’t have to be big, but giving out a $25 gift card to Chili’s or buying them a funny tech t-shirt can go a long way toward encouraging the behavior.
  • Good task flow is a balance between executing on parallel work streams and not wasting people’s time. Key checkpoints can save the developer time and effort.

Now that we have a good baseline on how to use a DoD, let’s take a look at a basic example. This is the template I take to most projects as a starting point and adjust according to the particular company’s details.

  1. Working Code – This should go without saying, but delivering quality code satisfying the tasks or issues is the highest priority.
  2. Unit Tests – Good unit tests covering the core functionality or bug fix go a long way, not only for the current effort but for use throughout the life of the code base.
  3. Integration Tests – Do the code changes have any effect on external interfaces or data integration? If so, a decent set of integration tests covering that functionality goes a long way.
  4. Automated Regression Tests – Update or add automated regression tests to cover the functionality. These should be plugged into the existing automated regression suite and should serve as the baseline of how developers prove their code works and meets the requirements of the task.
  5. Checkpoint 1: Peer Review – At this point, the developer has completed the code, written the tests and tested the change in the larger context by running the accepted regression suite. Prior to contact with the business, the changes should be reviewed and accepted by a peer or the team as a whole. This peer review should be handled in less than 10 minutes but goes a long way to ensure people’s work is holding to team standards and expectations.
  6. Checkpoint 2: Business Review – After the peer review is complete, the business owner should be notified and provided with the documentation to justify why the task should be considered complete. Getting a signoff from a user prior to deployment to downstream systems (QA, UAT, etc) saves huge amounts of time related to feedback loops. This business review should be a final checkpoint, not the only checkpoint. The technical resource should be communicating with the user as much as needed throughout the development process, whether it be UI design, data validation requirements, or other questions.
  7. Updating Technical Documentation – take 15 minutes and update the docs. In the team’s DOD, you could even have a list of the docs to be updated. ERD, process diagram, screen validation documentation, etc.
  8. Updating User Documentation – take 15 minutes and update the docs. In the team’s DOD, you could even have a list of the docs to be updated. If the UI changed, update the screenshots and provide the new instructions.
  9. Update Task – Once everything is completed, the final step is updating the task. Again, it doesn’t take long (15 minutes) to write a good explanation of what was done, what was touched, what tests were written or changed, who signed off and which documentation was updated. At this point, the task’s actuals should be updated to show the time spent and document any anomalies. Use this time to push anomalies to the project manager to be reviewed and possibly addressed if they might become longer-term issues.

Some developers and team members may look at this list and see nothing but busy work. While it can sometimes seem tedious and boring, following these processes actually protects the development team from volatile business expectations and provides a kind of social contract allowing the developers to properly cover all needed aspects of delivery. It’s also a great tool to have in hand during sprint planning or general task estimation – a small change in code may take 2 hours, but updating the documentation, writing tests and updating the users on the impact of the changes may double that time to 4 hours. Having the DoD helps everyone baseline their tasks and estimate their throughput more accurately.

The Best Tool in Software Development: Automated Regression Testing

When I think of automated testing, I think of the old proverb: “The best time to plant a tree was 20 years ago. The second best time is now.” Automated regression testing can be your greatest timesaver: more than good architecture, more than good requirements and more than hiring overpriced genius talent. It’s the thing that reduces downtime, slashes cycle time, and gives you the sense of comfort that your cell phone won’t be ringing at dinner tonight. And it doesn’t take significantly longer to write an automated test than it does to perform a similar manual test once or twice. Writing automated regression tests is like buying a tool for $20 once instead of renting a tool every month for $15. But even that analogy undersells it, because the benefits compound: the same tests apply to normal releases and to critical ones.

Let’s take a case where a team of 5 QA resources performs manual regression testing on an application and the full testing cycle can be completed in two weeks. They receive a release candidate and begin work on Monday with a goal of releasing the software the Friday of the following week. Sounds simple, right? During the 4th day (Thursday) of testing, a significant bug is found. The QA team sends the bug back to the development team, who fix it in 2 days (Monday morning). The fix touches significant portions of a core workflow, affecting the database, the application and even the tests themselves, because the fix involved changing the workflow slightly.

  • What is the QA team supposed to do after the bug is submitted? Do they continue to work on testing other parts of the application? They really don’t know at that point what parts of the application are going to be affected by the code changes for the bug fix.
  • The QA team gets the fixed release. Does the team need to start testing from scratch? To perform a full regression and make sure the bug fix didn’t inadvertently break something they already tested, they will need to start from scratch. Any other choice creates unknowns, and that is where downtime lives. Some may argue that the first 2 days of testing were on completely unrelated modules, but if a developer accidentally checked in something wrong, or a configuration change didn’t get redeployed with the newer version, the opportunity for failure is there.
  • The newly delivered software starts testing again, and on day 4 (Thursday) two significant bugs are found. At this point, the software that should be delivered tomorrow is not even going to start its full two-week regression test until at least the next Monday (based on developer estimates), so the release is already two weeks late. Any other bugs could cause similar delays. At this point, the business users are forced to make a decision: release the software with incomplete testing and hope the changes made during the bug fixes don’t affect previously tested modules. Hoping for the best case is a terrible place to live.

Let’s take a case for a critical bug fix. A recently released upgrade is broken due to a bug in the order processing workflow for your largest client. Your largest client cannot perform their business! The ticket is opened and the development team works 20-hour days for 3 days to fix the issue. By the time the fix is provided, it’s been coded by developers who haven’t slept and have been given permission by management to do whatever it takes to get it fixed. Overworked and under intense pressure, with code commits coming from 4 different developers, the development team delivers a new release that they say “should fix the problem” even though they only tested it once or twice. The QA team receives the new release of the software.

  • Are they supposed to perform a full regression test? Two weeks until your largest client can process their orders? Do you want to lose them?
  • What part of it are they supposed to test? Obviously they would test that the particular client can process their orders, but do they test other clients? How many other clients? What processes were touched by the code changes? At this point, the business users are forced to make a decision: release the software with incomplete testing and hope the changes made during the bug fixes don’t affect previously tested modules. Hoping for the best case is a terrible place to live, and eventually hope does not survive.

How to Start?

First, make sure there is emphasis on both parts of the term “automated” and “regression”. If the process is not complete, consistent and idempotent – meaning the results should cover all functionality and should be the same given the same inputs every time – and able to be run in a short amount of time, then it’s not going to be of value. Most of the time, these automated regression tests can be linked to an existing build or release process but don’t get bogged down with automating your entire release flow – even an automated regression test that requires a user to go and point it at an environment and click a button is better than the alternative.

If you are starting a greenfield application, it’s really easy to get this done. However, most of the time the application that needs automated regression the most is the lumbering technical debt wasteland of your most antiquated legacy system. The goal should be a kaizen approach – every time a developer commits code, add a new regression test. If 5 regression tests are written every week, you will have 65 regression tests within a quarter and over 250 after a year, which should give you significant coverage. Don’t worry about where you start; just recognize the value and get it done. Even if you can only automate half of the regression tests needed to release, it still creates 24 weeks of slack that allows your QA team to write even more tests.
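
To make the “5 tests a week” idea concrete, here is the shape of a single deterministic regression test: a minimal sketch using JUnit 4, where InvoiceCalculator is a hypothetical stand-in for whatever piece of code a commit just touched.

[java]
import static org.junit.Assert.assertEquals;

import java.math.BigDecimal;
import java.math.RoundingMode;
import org.junit.Test;

public class InvoiceCalculatorRegressionTest {

    // A tiny stand-in for the production class a commit just touched.
    static class InvoiceCalculator {
        BigDecimal total(BigDecimal quantity, BigDecimal unitPrice) {
            return quantity.multiply(unitPrice).setScale(2, RoundingMode.HALF_UP);
        }
    }

    @Test
    public void totalIsRoundedToTwoDecimalPlaces() {
        InvoiceCalculator calculator = new InvoiceCalculator();
        // Same inputs, same outputs, every run - no environment dependencies, no manual steps.
        assertEquals(new BigDecimal("10.13"),
                calculator.total(new BigDecimal("3"), new BigDecimal("3.375")));
    }
}
[/java]

The content of the test matters less than its properties: it runs without human intervention, it is deterministic, and it bolts onto the existing suite so it runs on every build.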

Where to Start?

Dave Ramsey. The guy is a genius not because his Total Money Makeover is complex, but because it is simple. Take all your debts (in this case, a full list of everything you need to get tested) and line them up in terms of value. What one test can you write today that would allow you to sleep better at night after a deployment because you know that piece works? That’s where you start. Make a team goal to write 5 tests a week and before you know it, you’ll sleep like a baby. The added benefit is the snowball effect this brings to the team. More automated regression means fewer critical bugs, which means more throughput for the team, which means more time available for writing regression tests.

A small note for the business: if you are always wondering why it takes so long to release software, this is usually the bottleneck. If bugs are consistently being released, or the same bug keeps reappearing after it was supposedly fixed in a previous release, then go ahead and put automated regression tests into your backlog.

 

Choosing an Enterprise Trading Communication Protocol

In building modular software for energy trading organizations, each module is designed to perform a particular set of tasks, usually correlated to the offices: front, mid and back. The main purpose of creating modular, independent systems is the ability to target a particular set of features without having to account for everything upstream and downstream in the IT value chain. Building software for exchange order management requires a different set of behaviors and characteristics than creating invoices for your residential power customers. Building the entire value chain of an energy trading shop into a single application may work for startups and limited scope, but will quickly decay in the face of scale. To create scalable systems, modularity is required, but with modularity comes the requirement of data interfaces.

Building data interfaces and APIs is an important part of building these types of software. For many different reasons, the architects and developers involved will spin their wheels creating a new, completely proprietary way to describe the nouns of their system, such as trades or positions, instead of using one of the multiple proven protocols already available for free. Usually the arguments against using the industry standard protocols internally are around perceived unique requirements, or the fully baked protocols are considered overkill. However, building a proprietary communications protocol creates an ongoing and expensive technical debt within the organization as the scope of the protocol expands and disagreements arise between system owners about the “right” way to model a particular interaction. It’s much easier to stand on the shoulders of industry giants and leverage years of trading experience by using proven and widely adopted protocols, while also gaining an externally managed mediation point for disagreements about how and what gets communicated.

Right now, there are three main standards for modeling communication in energy trading. These standards have gotten quite a facelift in the past 4 years due to their expansion in response to Dodd-Frank and other regulatory legislation. They cover not only format but also some standardization of content, allowing reuse of enumerations to describe certain trade values, such as settlement types or asset classes.

  • FIX / FIXML – Financial Information eXchange – This protocol is the most mature, as it was born from equities trading and other models that are purely financially settled. In recent years, however, the protocol has expanded into several commodities trading venues, including being the protocol of choice for ICE’s Trade Capture and Order Routing platforms as well as almost all of CME’s portfolio of electronic connectivity. This model is more normalized, in the sense of having multiple reference points to related data: instrument definitions can be communicated with different timing or through different venues, while trades simply refer to those instruments using codes, allowing for faster communication.
  • FpML – Financial Product Markup Language – This protocol was recently used by DTCC to implement their SDR, and while it’s much more robust in its implementation, it has quite a learning curve. The communication structure lends itself toward each unit being a complete and total description of a transaction, duplicating standard data such as product or counterparty information across transactions. This protocol is XML based and much more verbose, but allows finer-grained control over things like date logic for instruments. The protocol also has multiple versions, tailored to the specific needs of the organization.
  • CpML – Similar to FpML, this protocol is aimed squarely at describing commodities, and although it’s more widely adopted across the pond in Europe around EFET, it holds value for US-based organizations as well.

But picking a standard protocol is only the first step. There are some additional concerns one should keep in mind when implementing these protocols to reduce the amount of headaches later.

  • Treat internal data consumers exactly the same way you would treat external data consumers – like customers.
  • Create a standards committee that has power to define and model the system interactions, but make sure the committee can make quick decisions through informal processes.
  • Always force any requests to extend the model for proprietary requirements through a rigorous standard to ensure those designing the interfaces aren’t simply doing the easy thing rather than the right thing. I am always amazed at how quickly an organization can throw out the need for technical standardization when faced with a small decision to adjust an existing business process.
  • Broadcast the event generically, allowing listeners to determine what they need and throw away what they don’t. All else being equal, it’s easier to broadcast everything rather than open up individual data elements one by one (a minimal sketch of this follows the list).
  • Create and use common patterns for interfacing. Having one system be request-response and another be messaging based will create just as many issues as proprietary XML protocols.
  • As always, make the right thing to do the easy thing to do for developers. Make an investment in tooling and training for the stakeholders involved to ensure success.
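
To illustrate the “broadcast generically” point, here is a minimal, hedged sketch. TradeEventBroadcaster and its listener interface are illustrative names, not part of FIX, FpML or CpML; the payload would be whatever standardized document the organization settles on, and each listener keeps only what it needs.

[java]
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class TradeEventBroadcaster {

    // Listeners receive the full payload and are responsible for filtering it themselves.
    public interface TradeEventListener {
        void onTradeEvent(String payload);
    }

    private final List<TradeEventListener> listeners = new CopyOnWriteArrayList<>();

    public void register(TradeEventListener listener) {
        listeners.add(listener);
    }

    public void broadcast(String payload) {
        // Every listener gets the entire event; nothing is opened up element by element.
        for (TradeEventListener listener : listeners) {
            listener.onTradeEvent(payload);
        }
    }
}
[/java]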

Technical Debt Snowball

Most Americans are familiar with a personal finance strategy colloquially known simply as “Dave Ramsey“. The course is actually called The Total Money Makeover and is a regimented and proven strategy for getting your debt under control and moving into a more financially sound position. The foundation of the course is a single set of tactics collectively referred to as the “Debt Snowball”. Using this tactic, people line up all their outstanding debts from smallest to largest and begin to pay extra money toward the smallest debt first, while paying only the absolute minimum on all the others. Once the smallest debt is paid, the amount that was going to the first debt is added to the minimum payment on the second debt, and so on. Like a snowball rolling down a hill, with each debt conquered the total payment grows bigger and gives ever-increasing leverage to knock out the larger debts. There is also a huge psychological boost for the average person in starting with the small debts first, creating momentum and quick wins, which are imperative for those who have trouble forming habits without positive feedback. In a purely financial sense, it’s actually better to pay off the highest-interest debt first, but the difference in percentages may not justify the loss of momentum and positive feedback needed for the average person struggling with debt to be successful.

Surprisingly, when you start the process, there is a first step that doesn’t sound intuitive but is absolutely imperative for people living in a situation of crippling debt. The first step is the emergency fund. Before any debt is paid, the individual is strongly encouraged to create an emergency fund of $1000 as a cushion for any unexpected expenses or issues that may arise over the course of the journey. The reasoning is this: people living under the immense pressure of not knowing whether they will be able to pay their electric bill, or whether a car repair might cause them to lose their job, are simply unable to make good decisions. The pressure is so intense that most people’s ability to make strategic decisions gets short-circuited and they think only about how to relieve the pressure, even temporarily and by any means necessary. In addition, people living in such conditions have a strong tendency to want to escape the pressure in other ways, and those escapes tend to revolve around more bad money decisions, such as impulse purchases to make them happy or throwing money at guilty pleasures like food or drink. The first step of Dave Ramsey is to relieve this pressure and create a space for strategic thinking.

In any business with large IT operations, there is a very real concept of technical debt where the organization is making technical decisions in the present that may require some level of effort in the future to “correct” the decision. From an enterprise sense, even some of the major product decisions are creating technical debt they may not even be aware of. To use an example, if you choose to use Microsoft Outlook for your company’s email, the decision is creating technical debt related to upgrade paths – the organization will eventually be forced to upgrade or change platforms. For custom application development, technical debt has a direct correlation to certain decisions such as choosing not to automate your regression testing or not creating unit tests. These decisions could be due to budget constraints and simply not having enough staff to perform the activities, but you’ll end up paying for it later in either time or lost productivity.

When your IT organization creates too much technical debt, either through its own lack of discipline or through direct pressure placed on it by decision makers, it will eventually become crippled operationally. The staff gets so bogged down with production bugs and low-value activities such as manually testing code that no new features or business value can be delivered. The IT staff start living with the illusion of choice – do we spend our time today getting our CRM back up so we can conduct normal business, or do we work on the new website designed to bring in new customers? There is no choice there. When things get this bad, there will usually be some sort of change that allows movement on paying down the debt. This could come in the form of cancelled projects, contractors for hired muscle or even outsourcing an entire system replatforming if the debt is too large (i.e. IT bankruptcy).

http://xkcd.com/1205/

When the company becomes aware of the issues, the ensuing prioritization discussions begin. How do we pay down our debt? Some may suggest a purely financial model – eschewing momentum in favor of prioritizing the issues with the largest impact. However, using this type of prioritization means the organization will miss out on a key feature of the snowball: it needs to learn how to pay off technical debt without creating new debt. Taking on a huge new project in an environment of already crippling technical debt pushes off the positive feedback needed to reinforce that what is being done is good for the company. After six months, the key stakeholders may forget why it’s important and will refocus efforts on daily firefighting. By prioritizing some smaller and easier automation projects, momentum and feedback loops can be created, giving a better chance of long-term success in making good software delivery practices part of the normal culture.

The organization should start the attack on technical debt using continuous delivery concepts. First, it needs to create that emergency fund. For the most part, the quickest way to get your organization out of panic mode (i.e. emergency production support) is through automated regression testing. Being able to fix bugs without introducing new bugs is the debt snowball equivalent of knowing your lights won’t be turned off. It creates a cushion of confidence that developers can fix current production issues without introducing new debt. This is also the start of the technical debt snowball’s momentum, as the IT staff can start fixing actual bugs with the time they previously spent manually identifying and retesting old support issues. Every new automated test that saves 5 minutes a day buys back days of slack over the course of a year; think about 100 such tests, each saving 5 minutes a day (more than 8 hours of reclaimed time every single day), and you’ll see the snowball. Automated regression testing should always be the starting point for any debt paydown – even if you are replatforming! Once the automated regression snowball starts knocking out those small debts, the time saved can be used to tackle some of the larger pieces of automation, such as automated build and deployment scripts. This creates the cushion needed for your ops team to start monitoring services more proactively, leading to additional uptime as the ops folks can tackle issues before they become outages.

The cushion created by automation can then create the opportunity for the organization to tackle larger strategic issues. To be clear, executives don’t want to talk about a 5-year vision when core systems have been down three times this week. But once the snowball starts rolling, it’s hard to stop. The momentum created by continuous delivery builds a fervor in the business to create more opportunities for process optimization, which then allows the IT staff to work on projects with actual business value. Everyone in the organization begins to recognize the need to pay down debt regularly and to give developers and operations more budgeted time for the activities that reduce technical debt, such as writing automated tests and refactoring bad code. If you are currently working in an environment where the daily norm is fire drills and emergency releases, it can be excruciating, and ultimately the employer will start losing key staff. One way or another, technical debt will eventually be addressed. The only question is how painful it’s going to be.

Production-ready standalone REST using Dropwizard

There has been a large movement to REST interfaces because of the shift to new mediums, such as mobile, as well as the overall shift to more client-side, in-browser functionality using HTML5/JS. While server-side processing of views still has its place, users increasingly want fewer full request-response interfaces and more Ajax and push functionality in their web applications. This is almost creating a new era of two-tier architecture, shifting more of the domain and business processes closer to the user and leaving data access and other pluggable functionality on the server. If you squint, you can see the parallels to the PowerBuilder/stored-procedure paradigm, but the difference now is that these technologies are incredibly scalable.

Thanks to the large movement to open source technology used in the big internet shops (Facebook, Twitter, blah-blah-blah), the lowly developer doesn’t have to look far to grab a good starter kit for building some pretty cool applications. One of the latest ones I found is Dropwizard. Per the website:

“Developed by Yammer to power their JVM-based backend services, Dropwizard pulls together stable, mature libraries from the Java ecosystem into a simple, light-weight package that lets you focus on getting things done. Dropwizard has out-of-the-box support for sophisticated configuration, application metrics, logging, operational tools, and much more, allowing you and your team to ship a production-quality HTTP+JSON web service in the shortest time possible.”

The cool thing about Dropwizard is that the services are turned inside out. Instead of the software being embedded in a container, the services usually associated with containers are embedded in the application. It’s a J2SE-based HTTP/JSON/REST framework that doesn’t require the deployment and installation of containers or web servers. The reason that’s cool is scalability: if you want more capacity, you can scale horizontally on the same machine by simply starting additional instances.
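
To give a sense of how little code a service takes, here is a minimal sketch of a Dropwizard application with a single resource. It uses the newer io.dropwizard package names (the older com.yammer packages from the codahale-era docs linked below differ slightly), so treat it as illustrative rather than canonical.

[java]
import io.dropwizard.Application;
import io.dropwizard.Configuration;
import io.dropwizard.setup.Environment;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import java.util.Collections;
import java.util.Map;

public class HelloApplication extends Application<Configuration> {

    public static void main(String[] args) throws Exception {
        // Typically launched as: java -jar hello.jar server config.yml
        new HelloApplication().run(args);
    }

    @Override
    public void run(Configuration configuration, Environment environment) {
        // Register the JAX-RS resource with the embedded Jetty/Jersey stack
        environment.jersey().register(new HelloResource());
    }

    @Path("/hello")
    @Produces(MediaType.APPLICATION_JSON)
    public static class HelloResource {
        @GET
        public Map<String, String> sayHello() {
            return Collections.singletonMap("message", "hello");
        }
    }
}
[/java]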

The other cool thing is that their documentation to get started is damn good. While there are tons of blogs out there that really serve a great need in further refining the existing documentation, Dropwizard needs no help. So I am not going to try to reinvent the wheel here. Go get started: http://dropwizard.codahale.com/getting-started/

After doing some development with the platform, here are some observations:

1) It’s really good for REST-based services. But if you need to serve up any UI outside of a pure HTML5/JS model, you are going to have to jump through some hoops to integrate view technologies such as JSF into the stack. Sure, it has some capabilities for adding assets to the web container to be served up, but once you start down that road there are hurdles, and some fine-grained knowledge is needed to get things to fit right.

2) The security model requires some internal development, especially if you want to do something other than HTTP Basic authentication, and it can get hairy. I tried multiple authentication/authorization models and even tried extending some of the existing framework classes. It can be done, but it’s not easy.

3) There is no built-in Spring integration and no Java EE CDI. It’s not hard to initialize either one yourself and wire it into the model, though.

4) The stack is a best-of compilation of some of my favorite things: Jackson, Hibernate Validator and others. I also got introduced to some new stuff like JDBI. If nothing else, Dropwizard gets you to take a look at other ways of doing things, and it might even change your development stack.

Overall, it’s worth a weekend for the tinkerer and a POC after that at your shop, especially if you are deploying a mobile app with a lot of users.

Go follow the developer @Coda

Couple of Dropwizard links I found helpful:

http://brianoneill.blogspot.com/2012/05/dropwizard-and-spring-sort-of.html

https://speakerdeck.com/jacek99/dropwizard-and-spring-the-perfect-java-rest-server-stack

http://www.gettingcirrius.com/2013/02/integrating-spring-security-with.html

Putting software inside other software

I’ve lived most of my technical life in web-based applications. Because I am a Java guy and don’t like Swing development, when I build something it usually has a web-based interface. There are two really great things about web UIs: anyone can use them and they are horizontally scalable.

When you live in web applications, you also mostly live in application servers. There are so many names and flavors that you can get lost really fast. I came of age during the popularization of J2EE and did quite a bit of work on behemoth platforms like WebLogic and WebSphere, and I don’t think anyone can still justify why we used them. Cutting your teeth on EJB 2.0 CMP entity beans is not the most pleasant experience. Couple that with my experience with CORBA before that, and you’ll understand why I love Spring.

Recently I have been diving back into the technology, taking a particularly long look at Java EE. I’ve been using JSF for years, but always Spring-backed. I’ve taken a liking to the JBoss AS 7 capabilities and even the TomEE server. But the question still begs to be answered: why do we need these bloated application servers anyway?

With REST and simple embeddable web servers such as Jetty, you can get some of the goodness of a J2SE environment and none of the snowflake configurations required by the “standard” app servers. There are even some open source tools at your disposal, most notably Dropwizard, which gives you some pretty robust standalone web resources without the containers (well, the containers are still there, but they aren’t as intrusive). These platforms do have their drawbacks, requiring some pretty extensive work to handle security, and if you need a full-blown UI using JSF, it can get painful.
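
As a point of reference, here is roughly what “just embed the web server” looks like with Jetty: a minimal sketch using Jetty 9-era class names (the APIs move around between Jetty versions), serving one servlet from a plain main method.

[java]
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

public class EmbeddedJettyExample {

    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);

        ServletContextHandler context = new ServletContextHandler(ServletContextHandler.NO_SESSIONS);
        context.setContextPath("/");
        context.addServlet(new ServletHolder(new PingServlet()), "/ping");

        server.setHandler(context);
        server.start();   // the "container" is just another object inside the application
        server.join();
    }

    public static class PingServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
            response.setContentType("application/json");
            response.getWriter().write("{\"status\":\"ok\"}");
        }
    }
}
[/java]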

What the industry needs is an easily deployable standalone application model that doesn’t require extensive custom configuration. I need to build an application, deploy it over there, create some network rules and start it. And if I need it to cluster, it should be self-aware of its clustering environment (Hazelcast has spoiled me). When your application developers spend more time building plumbing and security than domain objects and service layers, it’s a problem. Don’t get me wrong, there are plenty of useful and relevant cases where large application server infrastructure is not only necessary but can be a strategic advantage in terms of scalability and ease of deployment. I just wish there were more choices.

Are you ready for the cloud?

The “Cloud” is one of the most overused and misunderstood buzzwords in technology today. For most people who have watched too many Microsoft commercials touting their cloud integrated operating systems, it’s this abstract resource in the sky that everything magically connects to and things happen. Most people do not understand that it’s simply an external network of computation resources (computers). Just like you have your server at work with all your spreadsheets on it (the ubiquitous S: drive or something), the public cloud is simply a set of computers located at a data center that you have access to.

But this is where the true cloud conversation starts, and there are so many other aspects of the cloud people do not understand. To understand cloud computing, especially a public cloud such as Amazon EC2, you must first understand virtualization. To understand virtualization, you need to understand operating systems and how they work with your computer. Essentially, a computer is made up of a bunch of different components (hard drive, memory, network card) that talk to each other through a common physical pathway (the motherboard). But there is another layer required for these communications, and that is the operating system (OS). Each component, such as the hard drive, only knows about itself and exposes an interface through which software can interact with it over that physical connection, and this software is typically the OS. When most people buy a computer, it has a single OS on it that single-handedly controls all the hardware interactions.

Virtualization simply adds an additional layer to allow multiple operating systems access to the same hardware. Instead of having one OS running on a computer, you can have two or more running as virtual machines (VMs). The virtualization software acts as a traffic cop, taking instructions and other interactions from each OS and ensuring they all get access to the hardware devices they need, while also making sure each device gets the needed information back to the correct OS. There are lots of examples of virtualization software, most notably VMware and VirtualBox, that allow users to run multiple operating systems on their machines. You can run an instance of Ubuntu on Windows 7. There are huge benefits to be gained from this, but when it comes to public cloud computing, the main benefit is shared hardware and the abstraction of computing resources away from physical hardware.

Once you understand how virtualization works, it’s not a big leap to realize the public cloud is simply an automated virtualized environment allowing users to create new VMs on demand. The user doesn’t have to worry about having enough hardware available or even where that VM is located. They simply create an instance of a VM by specifying attributes of how they want the machine configured, such as processor speed or memory capacity, and don’t care about where or how. The public cloud is simply a manifestation of a concept that has been maturing for quite a while – turning computation resources into a homogeneous commodity rather than a specialized product.

This is the point where light bulbs start turning on above the heads of executives. They start looking at these opportunities to use generic, commoditized computing resources and remove the risks and costs associated with maintaining and managing data centers. All of the hardware that sits unused for weeks at a time because it’s only needed for special cases, like emergency production support releases, can be sunset. We can build a performance testing environment in a single day and then delete the entire thing by the time we go home. The possibilities are endless.

But let’s be clear about something. There is something about public cloud infrastructure that makes it special. It’s not the virtualization software. It’s not the hardware used. It’s the people.

Public clouds like Amazon EC2 have the best and brightest operations engineers on the planet creating automated processes behind the scenes so that users like us just need to click a button. It’s not easy. Their environment is a hurricane of requests and issues that most people can’t dream of. They manage half a million Linux servers. 500,000 servers. Most people ask how they do it, and the answer is simple: they automate everything they can. Implementing that simple answer is where most people start to run into issues. Luckily, Amazon hires the best ops people in the world and probably pays them a small fortune to do this, both of which are simply not available to most businesses. Public cloud is about standing on the shoulders of the best operations talent in the world and taking advantage of their automation.

Remember those light bulbs I referred to earlier above the heads of executives? Let’s take a sledgehammer to them with a single statement: All your data that you put into the public cloud is on the shared hardware. Your CRM data that you have in your cloud database? It could be sitting on the same drive and the same platter as your competitor. Your super-secret computation process could be sharing processing time with Reddit on the same CPU. The public cloud means we all share the same hardware. While there are quite a few security measures to make sure the virtualized segregation stays in place, we all know what typically happens – all secure data is deemed not cloud worthy and our virtualized hopes and dreams die with a whimper.

Until someone says the words “Private Cloud”.

Here’s the problem with “Private Cloud”. The public clouds are good because of the people and the culture. It’s not the tools. In fact, I would be willing to bet that 99% of the tools used by Amazon could easily be purchased or found via open source. Most organizations simply don’t have the resources, the processes or the intestinal fortitude to engage in such an endeavor. You need three things to be successful: a culture of automation; talented hybrid operations and development staff willing to blur the lines between coding and deployment; and tools to automate. You can buy the tools. You can buy the people. You can’t buy culture.

Let’s get past the limitations I stated earlier and create a hypothetical company with its own private cloud. Unless your software is built for the cloud, you’re not really buying anything. When I say built for the cloud, imagine being able to scale an application horizontally (think web application) by adding new web servers on demand. Here’s a basic list of what it takes to create a VM from scratch and take it into a production cluster:

  1. Create a new virtual machine.
  2. Install the correct image/OS.
  3. Automatically configure the VM to the specifications needed for the application (install Java, run patches, etc).
  4. Install any containers (JBoss, Tomcat, whatever) and configure them automatically (JMS, DB Connections, Clustering with IP addresses, etc).
  5. Deploy an EAR or WAR from your internal build repository and have it automatically join any existing cluster seamlessly.
  6. Configure the network and other appliances to recognize and route requests to the new server automatically.
  7. Do everything just stated in reverse.

Unless you can do something like this automatically, you aren’t cloud; you are virtualized. Not that simply being a company that utilizes virtualization isn’t a great first step. There are many uses of virtualization that could quickly provide benefits to a company, such as virtualized QA environments that can be spun up to run emergency production support releases. Virtualization is a great thing by itself, but virtualization is not cloud.

Second, steps 4 and 5 are where most software shops get caught up. Their applications are simply not built for it. When the architects laid the groundwork for the application, they didn’t plan for quickly adding new instances to the cluster (for example, by relying on fixed DNS entries to route requests). Some decisions, made explicitly or never addressed at all, are so core to an application that it can’t handle these types of environments. It’s a square peg in a round hole, and while some applications can be retrofitted to handle the automation, others will need to be rearchitected. There are even some applications that don’t make sense in the cloud.

I encourage everyone to give virtualization a look. Every organization that has multiple environments for development, QA and UAT would benefit from virtualizing them. There are many software packages and platforms out there that are easy to use, and some are even open source. But before you start down the cloud path, make sure you do your due diligence. Are you ready for the cloud? Sometimes the correct answer is no. And that’s OK too.

Demo Code: Create Persistence Jar using JPA

I love keeping my repository code in a single jar, isolated from all other code. Persistence code should be portable and reusable as a library specific to a database or even a schema. This wasn’t always the easiest thing to do, especially in an ecosystem where the library may run in a Spring-based webapp, a Swing GUI and a Java EE EJB application. Here’s the template code for how to get that ability.

First, let’s look at the basic EntityManager usage pattern. There are much more sophisticated ways of doing this but I’ll keep it simple for my own sake.

[java]
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

//Get the correct persistence unit and EntityManagerFactory
EntityManagerFactory entityManagerFactory = Persistence.createEntityManagerFactory("demoManager");
EntityManager entityManager = entityManagerFactory.createEntityManager();
entityManager.getTransaction().begin();
//Create an object and save it
entityManager.persist(new ApplicationUser());
//We are just testing so roll that back
entityManager.getTransaction().rollback();
//Close it down.
entityManager.close();
entityManagerFactory.close();
[/java]

JPA persistence is driven from a file in your classpath, located at META-INF/persistence.xml. Essentially, when creating the EntityManagerFactory, the Persistence class will go look for the persistence.xml file at that location. No file? You’ll get an “INFO: HHH000318: Could not find any META-INF/persistence.xml file in the classpath” error. Eclipse users: sometimes you have to do a clean build to get the file picked up for the JUnit test. Here’s a simple persistence.xml that shows how to use JPA outside of a container.

[xml]
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
version="2.0">

<persistence-unit name="demoManager" transaction-type="RESOURCE_LOCAL">
<class>com.stephennimmo.demo.jpa.ApplicationUser</class>
<properties>
<property name="javax.persistence.jdbc.driver" value="org.hsqldb.jdbcDriver" />
<property name="javax.persistence.jdbc.user" value="sa" />
<property name="javax.persistence.jdbc.password" value="" />
<property name="javax.persistence.jdbc.url" value="jdbc:hsqldb:." />
<property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect" />
<property name="hibernate.hbm2ddl.auto" value="create-drop" />
</properties>
</persistence-unit>

</persistence>
[/xml]

Notice that when you create the EntityManagerFactory, you need to give it the name of the persistence-unit. The rest is pretty vanilla.

Next, let’s look at the basic JPA object.

[java]
import java.io.Serializable;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name="APPLICATION_USER")
public class ApplicationUser implements Serializable {

    private static final long serialVersionUID = -4505032763946912352L;

    @Id
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    @Column(name="APPLICATION_USER_UID")
    private Long uid;

    @Column(name="LOGIN")
    private String login;

    //Getters and Setters omitted for brevity's sake

}
[/java]

And if you want to use the JPA in a container, here’s a simple example of how the persistence.xml would change.

[xml]
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd" version="2.0">

<persistence-unit name="demoManager" transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>java:/DefaultDS</jta-data-source>
<class>com.stephennimmo.demo.jpa.ApplicationUser</class>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect" />
<property name="hibernate.hbm2ddl.auto" value="create-drop" />
</properties>
</persistence-unit>

</persistence>
[/xml]

That’s basically it. The most painful point: if you are trying to use the persistence jar inside an EJB jar, you are required to list out the classes in persistence.xml.

As always, demo code available at my public repository.

http://code.google.com/p/stephennimmo

DevOps culture can create a strategic advantage for your company

I’ve recently stumbled across a great podcast called DevOps Cafe. These guys basically push out about a show a week, interviewing different people from the DevOps community or simply discussing DevOps culture and why it’s great. DevOps is a software development culture that stresses communication, collaboration and integration between software developers and operations. DevOps blurs the line between development, deployment and system administration creating a more team based approach rather than throwing things over the proverbial wall. DevOps is something that has existed for many years now, but like so many other things, there is finally a nomenclature and language being built around it because of its adoption into the enterprise arena.

What are some of the things this solves? If any of these ring true in your organization, you might want to take a look at DevOps.
  • Does it take weeks to deploy a new build to production?
  • Does the code deployment require manual intervention such as ops updating a properties file?
  • Is your QA team manually testing some production support patch when they should be busy working on regression tests for the next build?
  • After a release, do you have to have a period of “all hands on deck” because you aren’t sure whether something is going to go wrong?

What does DevOps culture bring to your organization that can create an advantage?

  • Agile development – small sprints create small deliverables that provide small, manageable change. Lots of small changes rather than huge changes can lower the risks of deployments and help get truly needed bug fixes and enhancements out the door. Hey trading organizations – this should be something you are very interested in as you typically have lots of small changes!
  • Continuous Delivery – your dev teams should be able to push the latest code to production at all times. There are no more huge branch-and-merge operations, because development work is broken down into many small chunks, pushed continuously and tested continuously. I can’t tell you how many shops I have worked in that take weeks to deliver already-completed code to production, only to have that code become obsolete or need changes before going live. Moving to production could be a weekly thing, and something you do every week should be easy, right?
  • Automation – if you do something more than twice, you need to automate it. Developers should be writing code, not creating builds. QA teams should be creating repeatable regression tests, not manually testing the UI again. Operations should be writing new optimized deployment scripts, not manually patching servers. And they should all be working together to push the product to production – a failure on any part is a failure on the whole. We should all be in this together.

How do you get started?

  1. DevOps is a culture, not a tool set. A new paradigm needs to be established and continuously reinforced – we are all in this together.
  2. From the development side, stop branching and start checking in on the trunk. Create a new build on every check-in and have that build fully regression tested. EVERY TIME you check in, the code should be production ready.
  3. From the ops side, start automating the environment by using virtualization. Ops should be able to add a new node to a cluster with the push of a button. Ops should be able to deliver builds to a host with the push of a button.
  4. Your QA team needs to refocus their efforts on producing repeatable regression tests on the builds. Again, this should be repeatable.

Here are some of my favorite tools for the job (remember, I am a Java guy).

  • Build with Maven.
  • Manage your artifacts with Sonatype Nexus. I also like Apache Archiva.
  • Automate your testing with TestNG, JUnit. Create regression testing for webapps using Selenium.
  • Deploy using Puppet. I also like Jenkins. Depends on how fancy your infrastructure is. Puppet does a lot with infrastructure management as well.

© 2017 Stephen Nimmo
