Saturday, January 22, 2011
Automated Functional Tester
We use automated testing very extensively, from developer-written unit tests to fully automated test suites. Sometimes clients balk at the high license fees associated with full-featured commercial testing tools like QTP, and this open source testing framework gives us a great capability: we can provide our clients with custom software development backed by fully automated testing at a lower price point!
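To give a flavor of what a fully automated functional test looks like, here is a minimal sketch using Selenium's Python bindings - purely illustrative, not the AFT framework's own API; the URL, element IDs, and expected text are hypothetical:

```python
# A minimal, illustrative functional test using Selenium WebDriver (Python).
# The URL, element IDs, and expected text are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Drive the application the same way an end user would.
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("demo_user")
    driver.find_element(By.ID, "password").send_keys("demo_password")
    driver.find_element(By.ID, "login-button").click()

    # Verify the expected business outcome, not just that a page loaded.
    welcome = driver.find_element(By.ID, "welcome-message").text
    assert "Welcome" in welcome, f"Unexpected welcome text: {welcome}"
finally:
    driver.quit()
```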
Great job to the AFT team!
Friday, November 26, 2010
Google Gadgets
Wednesday, November 3, 2010
Integrating Content in Customer Workflow Applications
- LexisNexis Litigation Workflow Tools
- D&B Risk Management Tools
- Thomson Reuters Healthcare Clinical Evidence Solutions
Monday, January 11, 2010
Ten Steps to Agile Software Development Process Improvement
In a previous blog entry, I mentioned 10 Steps to Improve Software Development Process. If you read these and pause for a minute, you'll notice that many of these are actually principles of Agile development. I want to expand on a few of these thoughts here.
1. Focus on the top 20% of features:
This is one of the primary drivers of value in Agile custom application development practices. By prioritizing and rank-ordering every item in the feature request backlogs, only the most important ones are developed. By focusing on this Top 20%, you can often satisfy 80% of what end users want, and they can start using the system sooner, adding to the profitability of your company. (If the most important 20% of the features do not add to your company's profitability, you should probably cancel the project!)
2. Break things up into smaller projects:
Big projects turn into huge projects. And miss deadlines. And run over budget. Reading the Standish Group CHAOS report figures on failed IT projects always makes me wonder why more people don't follow the simple advice to "not bite off more than you can chew."
Again, the Agile concept of smaller, more frequent releases echoes through this item. Getting a working system into the hands of users is always a good thing - that's why you are implementing the system to begin with! Delivering an important and useful portion of functionality early is even better. This has many organizational benefits - from psychological ones like proving that the overall program can work and the enthusiasm from a successful launch, to risk mitigation benefits like the ability to redirect spending on an investment after the first release if priorities change, to the practical one that it is a lot easier to notice a project in trouble when a specific release is over budget or late than when fuzzy 'milestones' are being missed.
5. Obtain user feedback
During the implementation process, keep the end users constantly in the picture. Show them early versions of the system - for instance at each Sprint Demonstration or at least with small, frequent releases. Let them give you feedback, and above all, let them change their minds without being punished. Trust your end users; they know what works and what does not - and they are the ones who are going to use the system every day! When they see working features, they may be able to better prioritize other changes or features to be able to complete an important business process or simplify many steps.
7. Mistakes are a way of learning
Remove the blame culture. Let people make mistakes quickly - failing early is much better than missing a delivery date. These so-called mistakes are part of the overall discovery process and help the team evolve toward the eventual solution. If you blame people for mistakes (picking the wrong feature for a Sprint, not seeing a bug, very bad color choices), they will react to the blame and change their own behavior. Rather than being active participants in making the project a success, they will become "followers", just doing what they are told - no way to get blamed there - and punching the clock. The good people will hate the blame culture enough to look for work somewhere better.
10. Something has to give in the Iron Pyramid (Quality, Time, Cost, Features)
The old Iron Triangle, which I've always thought should be an Iron Pyramid with Quality as an explicit dimension, still holds true today. In an agile development process you try to "bake in" high Quality by using unit tests, refactoring, engaged people, and frequent review processes. You fix the Time or at least the time cycles - every two weeks you have a Sprint release; and the Cost is essentially fixed based on the size and members of the team. The Features dimension is what gives - and that is where the prioritization comes in. By putting the most important features first, you complete as many in each Sprint as you can and know that you are achieving the most important features at any given time.
Over a longer time horizon, say a Release of 5 to 6 Sprints, you can adjust the Time and Cost dimensions by letting the process run for more Sprints, or by deciding that enough Features are ready and stopping the project there.
I focus on the Iron Pyramid because we all know the truth. If you push people hard enough, they will relax some of the quality checks, and they will get one extra feature in. But you have lowered the quality level - maybe not enough to notice today, but you will pay for it in the future. Whether it is the performance testing that is skipped, leading to a slow web application, or "smelly code" that costs more and more to maintain over time, you pay for the lapse in quality.
Thursday, December 24, 2009
Data Visualization and Web 2.0
I love data visualization techniques. From my early days as an operations data analyst through all of my software development career, finding patterns in data and finding an easy way to convey the pattern through a graph or other visualization has always been fun. Working on custom application development projects that provided a picture of how the business was doing, where customers were spending, etc., is fun. Now working with our Business Information Services clients to help create innovative approaches to information discovery and data analysis is fun. It really is true that often "a picture is worth 1,000 words."
I vividly recall a stubborn memory leak my team had been trying to track down for several weeks. This was a long time ago, in the days of VB6 COM DLLs running inside ASP web pages, and we were pretty sure our code was not leaking. The team had found memory leaks before, and tracked every single one of them down to circular references in our COM object model that prevented the automatic release from ever occurring. Historically, it had been easy to find a leak by running a simple load script, executing each page thousands of times in isolation and watching to see which page showed the memory leak. But not this time. We had run the load tests several times and never found the leak. We scanned the code thoroughly. We added as many "set obj = nothing" safety lines as we could. But still the production web servers kept leaking memory, and we were forced to move the automatic restarts of the servers from weekly to daily and hope our band-aid would hold.
One day, I had some downtime and decided to see if I could find a correlation between the memory used and the pages being invoked on the production system. An hour or two later, I had pulled all the IIS log files, gotten dumps of memory traces from the systems team, and started my analysis. A bit of awk, grep, Access, and other quick-and-dirty processing later - pulling out the data I wanted, adjusting for time zones, aggregating hits into cumulative 15-minute buckets, and otherwise lining up the datasets - and I was ready to plot the data.
Instantly, the answer to where the leak was coming from was obvious. The two lines, cumulative hits to a particular URL and memory in use, were nearly on top of each other. The correlation jumped out, completely overwhelming the noise of other URLs and pages. This is the power of a good visualization. (Of course, it turned out that the leak was coming from a web services API proxy URL, not a page in the website that everyone had focused on! Since the proxy was not 'in' the website, it had been ignored for weeks as the team hunted for the answer.)
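In today's terms, the same quick-and-dirty analysis might look something like this Python sketch (the file names, log layout, and column names are hypothetical):

```python
# Sketch: correlate cumulative hits per URL with server memory use.
# File names, log layout, and column names are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

# Web server log: one row per request, with a timestamp and the URL hit.
hits = pd.read_csv("webserver.log", sep=" ", names=["timestamp", "url"],
                   parse_dates=["timestamp"])

# Memory trace from the systems team: timestamp and memory in use (MB).
memory = pd.read_csv("memory_trace.csv", parse_dates=["timestamp"])

# Aggregate hits for one suspect URL into cumulative 15-minute buckets.
suspect = hits[hits["url"] == "/api/proxy.asp"]
cumulative_hits = (suspect.set_index("timestamp")
                          .resample("15min").size().cumsum())

# Average memory in use over the same 15-minute buckets.
memory_used = (memory.set_index("timestamp")["memory_mb"]
                     .resample("15min").mean())

# Plot both series on a shared time axis with separate y scales.
fig, ax1 = plt.subplots()
ax1.plot(cumulative_hits.index, cumulative_hits.values, label="cumulative hits")
ax2 = ax1.twinx()
ax2.plot(memory_used.index, memory_used.values, color="red", label="memory (MB)")
ax1.set_xlabel("time")
ax1.set_ylabel("cumulative hits")
ax2.set_ylabel("memory in use (MB)")
plt.title("Cumulative hits vs. memory in use")
plt.show()
```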
Recently, some colleagues and I were discussing the areas in which Alliance Global Services provides solutions to clients. This is a pretty broad topic, and we talked about the types of industries we serve (including our focus on Business Information Services), the geographies we serve (mostly the Northeast US, from about Virginia to Boston), and the types of services we provide (Custom Software Development, Application Architecture Analysis). And we talked about the easiest way to visualize our coverage areas.
Well, today I had a little downtime before the holidays. So I took a list of our client locations, used some simple geocoding tools, and put together two quick samples of mapping in the Web 2.0 world - one using a Yahoo! map through batchgeocode.com and the other using the Google visualization API.
Batchgeocode.com made it very easy to process the first set of data and create a map, but that was as far as it could take you. Google was a different story - getting the map running required coding, but then I had full control. To see the first map, visit this blog on Alliance Global Services.
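For the curious, the geocoding step itself can be sketched in a few lines of Python using the geopy library - just one option, not what batchgeocode.com or the Google API use under the hood, and the addresses and output file below are made-up examples:

```python
# Sketch: geocode a list of client locations and write out lat/long pairs
# that a mapping tool can consume. Addresses and file names are made up.
import csv
import time

from geopy.geocoders import Nominatim

addresses = [
    "Philadelphia, PA",
    "Boston, MA",
    "Richmond, VA",
]

geolocator = Nominatim(user_agent="client-map-demo")

with open("client_locations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["address", "latitude", "longitude"])
    for address in addresses:
        location = geolocator.geocode(address)
        if location is not None:
            writer.writerow([address, location.latitude, location.longitude])
        time.sleep(1)  # be polite to the free geocoding service
```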
Obviously it's not perfect, but lots of fun for a quick afternoon's work!
Saturday, October 10, 2009
Custom Application Development Best Practices
A colleague of mine read my recent post about code quality and reminded me of the first time he read my musings about code quality and work habits. He pulled up a team Best Practices document I had written back in 2001 for one of our first .NET application development projects, and it's great to see how much is still 100% relevant today.
Work Habits
- Do it now!
If you see a change that you need to make, it's best to do it right now! It is very unlikely you'll get a chance to come back to it later. Plus, other people will begin to code with your bad code/names/etc. and the change that is needed tomorrow will be bigger than the change needed today. If you cannot do it right now, add it to your task list (in Outlook) and set a reminder for tomorrow morning. That way you will have another chance at it.
- Finish one task before starting another!
In general, it will be much better to be 100% complete with one piece of code/functionality than 50% complete with 2 or 3. You are focused, and able to actually deliver a fully working, tested piece of functionality to the client, rather than having 2 or 3 half-broken, half-completed, half-baked ideas floating around. Note, this is a bit in conflict with the previous point, and that is a little intentional.
- Understand an entire function/process before editing it!
When you are about to make a change, enhancement, or bug fix to a routine, read it first. Understand what the code is supposed to do (hopefully explained in the comments at the top) and then how it is (or isn't) actually executing. Making changes with only half an understanding is guaranteed to cause additional bugs that have to be fixed later.
- Keep it simple!
Keep your code as simple as possible. Keep it maintainable, readable, and easy to debug. Most of the time on our project(s) is spent in QA and debugging. Most of the project is fairly basic - read some data from a database, modify it, and write it back out. If we can do this 80% of the project simply and correctly, we can spend more time on the interesting, fun parts. If we make it complicated and buggy, we will spend 120% of our time fixing this easy section!!
- Optimize at the end, after it works
Write clean and simple code. Understand what your piece of code needs to do, how many users will use it, and how quickly it needs to execute. Plan for this, and code for it. Make all your code work. Then find the slow spots and, if they need it, optimize them.
Eight years later I might choose to add bullets about automation, unit testing, or investing time and effort in the software rather than the design doc, but the fundamentals are still the same!
Code Quality and Software Metrics
All good developers have a sense for Good Quality Code. They may call it "clean code" or talk about how easy it is to maintain. When code is not good, they talk about "code smells" or "ugly code" or say that it is simply "unreadable". Good developers have this sense, even when "good" is not strictly defined and is not measurable. Good developers go out of their way to keep the code that they work on clean, maintainable, and easy to read, because they know they (or one of their colleagues) will be reading that code sometime in the future, trying to figure out what it does and why that darn bug has slipped into it.
About ten years ago the book The Pragmatic Programmer recounted studies about the effect of visible defects (a broken window) on the way people behaved in terms of caring about their surroundings. The findings apply directly to software quality, as the developers on a project typically look to the existing code base to figure out the style of code to implement. In the worst case, a developer will start a new web page, batch job, stored procedure, or other module by simply copying an existing one and hacking up the code to do the new feature. In a typical case, a developer will look for references in the existing modules to see "how is that done on this project?" So leaving the "broken windows" in your code base quickly leads to more broken windows as the ugly code is copied or used as a reference - more windows are broken, more graffiti is sprayed on the walls. Once the code base is littered with this low quality code, it's hard for a developer without a very strong internal sense of what is good to tell what is considered good on this project.
By paying attention to the little details, by setting coding standards and making sure people follow them, and by requiring intelligent, useful comments in each module describing its purpose, the tech lead or the team as a whole sends the message that they want to work in a clean, safe, easy-to-move-around-in environment. They don't want to work in a littered, trashy, graffiti-covered neighborhood with broken glass everywhere. They care about the quality of the code.
One of the things I love about using automated tools to check the entire code base on every build (or at least every week) is that it makes it possible to check for all the little details, every build. In most business applications it is rare for every line of every module to be code reviewed. With an automated tool, it's easy - in fact it's automated! Each build produces clear, objective metrics about the quality of the code base.
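As a simple illustration of the kind of check an automated tool can run on every build, here is a small Python sketch that walks a code base and flags long functions and modules without a descriptive comment - the source directory and thresholds are arbitrary examples, and a real project would lean on a dedicated static-analysis tool rather than a homegrown script:

```python
# Sketch: a tiny, build-time code quality check using only the standard library.
# The source directory and thresholds are arbitrary examples.
import ast
import pathlib

MAX_FUNCTION_LINES = 40
SOURCE_DIR = pathlib.Path("src")

for path in SOURCE_DIR.rglob("*.py"):
    source = path.read_text()
    tree = ast.parse(source)

    # Flag modules that start without a docstring explaining their purpose.
    if ast.get_docstring(tree) is None:
        print(f"{path}: missing module docstring")

    # Flag functions that have grown past the agreed size limit.
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                print(f"{path}:{node.lineno} {node.name} is {length} lines long")
```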
The benefits of paying attention to little details, of sending the message that you care about the quality of the application code ripple through the entire application maintenance cycle. Developers are more productive, because the code is easy to read and understand. New developers are able to learn the code base more quickly because the code is clean and commented. Fewer errors are made because methods are short and implement a single function in an easily understood way. Global state, side effects, and tricks are not present in the code to cause trouble when their use is not fully understood by the next developer to modify them. Bugs that do slip through or odd corner cases that seem to only occur on one server are easy to track down because the code base has useful logging, has defensive checks, has error handling with relevant error messages, and only uses data values near to where they are populated.
A powerful psychological benefit for good developers when using an automated tool is that the entire team can see the quality metrics - ideally published on a regular basis and taped up on the wall of the team room - and take pride in watching the quality level go up as new features are added and smelly code is cleaned up. The team can know that not only are they making the application work today - they are making something of good quality that will keep working in the future.
Importance of Business Domain Understanding
As a "dweeb", I love learning about new technologies. New programming languages, new frameworks, new Open Source tools. I love talking with colleagues about how we can use these great new technologies to solve interesting problems Faster, Better, Cheaper in order to make our clients happy. When working with one of our Outsourced Product Development clients or speaking with an Business Information Services prospect about how Hadoop can be used to streamline content processing it sometimes seems like that technology is important in and of itself.
But, sorry to say, the technology is not important by itself. And being an expert in a given technology is not enough. The true value that a "developer" or technology consultant brings is the ability to solve business problems, often by using technology in innovative ways. In order to solve those business problems, you have to understand the business domain.
Having an understanding of the business, the associated terminology, how the business model works, how the business makes a profit, and how the proposed solution will add to that profit is critical in making all the little tradeoffs that must be made each day while designing and implementing a solution. In our