Friday, October 31, 2014

Step by Step guide to installing Robot Framework

Robot Framework has been a little tricky for most folks to install. It now provides an installer for Windows, but it is still worth knowing the detailed steps if you do not wish to use the installer.

So let's get started -

Installing Robot Framework requires a Python installation as a prerequisite.

All steps below are for Windows OS (32/64 bit).

Step 1. Install Python version 2.7.8
(Python 2.7 is a supported version for Robot Framework 2.8.x)
Installer files: python-2.7.8.msi - for 32 bit / python-2.7.8.amd64.msi - for 64 bit

Ex. Installed at c:\python27\

Update PATH environment variable for Python -
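For example, on cmd.exe the PATH can be extended as below (the paths assume the default C:\Python27 install location from Step 1):

```shell
:: Append the Python install and Scripts directories to the user PATH.
:: setx persists the change for new command prompts.
setx PATH "%PATH%;C:\Python27;C:\Python27\Scripts"
```

For the current command prompt session only, use `set PATH=%PATH%;C:\Python27;C:\Python27\Scripts` instead.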


**Verify Python installed and the PATH set
Run command:
cmd C:\>python --help
Lists Python command-line options

Step 2. Install setuptools 7.0
(Easily download, build, install, upgrade, and uninstall Python packages)

Run command (first download the ez_setup.py bootstrap script from the setuptools project):
cmd C:\>python ez_setup.py
"Installed c:\python27\lib\site-packages\setuptools-7.0-py2.7.egg"

**Verify Setup Tools folder and files available -
setuptools-7.0-py2.7.egg file created under ..\lib\site-packages

Step 3. Install pip
(Python Package Manager)


Run command (first download the get-pip.py bootstrap script):
cmd C:\>python get-pip.py
Downloading/unpacking pip
Installing collected packages: pip
Successfully installed pip
Cleaning up...

**Verify pip folder and files available -
pip.exe created under C:\Python27\Scripts

NOW you can finally install ROBOT FRAMEWORK !

Option 1 - Install Robot framework using pip

Run command :
C:\Python27\Scripts>pip install robotframework
Downloading/unpacking robotframework
Successfully installed robotframework

**Verify pybot.bat created under C:\Python27\Scripts

** Verify Robot framework installed
C:\>pybot --version
Robot Framework 2.8.6 (Python 2.7.8 on win32)

Option 2 - Install Robot framework using the Windows installer

Pre-requisite - Python installed
Installer file: robotframework-2.8.6.win32.exe

Option 3 - Use the Stand-alone Robot framework JAR package
Robot Framework is also available as a stand-alone robotframework.jar package.
This package contains Jython and thus requires only JVM as a dependency.

Maven Central -

Assumes Java is already installed.
** Verify Java installed
cmd C:\>java -version
java version "1.7.0_05"
Java(TM) SE Runtime Environment (build 1.7.0_05-b06)
Java HotSpot(TM) 64-Bit Server VM (build 23.1-b03, mixed mode)


File: robotframework-2.8.6.jar

Run command:
cmd C:\dev\robotfx\jar>java -jar robotframework-2.8.6.jar

** Verify Robot framework installed ...

robotframework.jar - Robot Framework runner.
Usage: java -jar robotframework.jar [command] [options] [input(s)]

Hurray, you are NOW READY to use Robot Framework for writing your tests !!


Thursday, September 11, 2014

Top 5 Agile project Myths – Smashed !

Catching a glimpse of a snake charmer in a busy Indian metropolis is a myth that many foreigners visiting India still cherish (wishful thinking, you might say!). But the reality is that snake charming has been illegal in India for a number of years (under the India Wildlife Act); although snake charmers do still exist, they are now an 'elusive' sight. Yet the myth persists, and something similar is the case with agile projects and the myths surrounding them.

If you look at the annual State of Agile surveys for the last few years, they have been reporting similar results with regard to 'Concerns' about agile (read: myths - see Reference 1 below), reflecting the failure of agile enthusiasts to bust the folk tales surrounding agile project delivery. This post therefore hopes to smash the top 5 agile project myths (popular folk tales), with a pinch of sugar/salt (take your pick) for added flavour.

Source: Reference 1

Myth1– Agile projects do No Planning

Traditional projects have a big plan upfront, and planning is highly visible, with a complete plethora of activities draining the energy for a couple of weeks or months, and resulting in a sometimes scary GANTT chart.

Agile projects instead focus on continuous planning, and planning is therefore invisible!
Well, you may ask how exactly that is achieved. The answer lies in the 5 levels of agile planning, which include the following -
  1. Product vision plan
  2. Product roadmap plan
  3. Release plan
  4. Iteration (Sprint) plan
  5. Daily plan

Source: Reference 2
These 5 levels of continuous planning allow agile projects to do continuous risk identification, assessment, and risk burn-down via the daily scrum / scrum of scrums, among others. Agile project teams are able to visualize these risks using risk burn-down charts, and to highlight potential upcoming pitfalls to the stakeholders.

Myth 2 – Agile projects are Not Predictable and lack Transparency

Traditional projects typically provide an amazingly detailed big GREEN cover report with deep RED inside (aka the Watermelon Report) for those who dare to dig deeper into a project's health and find the darker side of reality.

Agile projects instead provide big visible dashboards on a real-time basis, visible to anyone who cares to look. The charts could include big visible burn-down charts and release burn-up charts, showing the current state and future trends and allowing trade-off decisions to be made early in the project lifecycle. For teams pursuing continuous improvement, the CFD (cumulative flow diagram) and other technical metrics indicating project health and quality complete the project dashboard, thus providing full transparency and predictability.

Myth 3 –  Agile projects have No Architecture and lack design

Traditional projects typically follow a Big Design Up Front (BDUF) model, resulting in the creation of Big Balls of Mud (BBM) and extremely high maintenance costs, primarily due to the high cost of change inherent in the muddy architecture.

Agile projects instead tend to let the architecture evolve and follow an emergent design model. Since architecture is indeed the 'hard to change stuff', agile projects tend to have as little of it as possible, taking guidance from enterprise architecture policies and application architecture constraints. The design spectrum diagram below ranges from BDUF to cowboy hacking, with agile projects falling in the middle.

Source: Reference3

The Ambysoft surveys indicate that 77% of agile projects did perform high-level initial architecture envisioning, and such teams focus mercilessly on reducing technical debt, using engineering practices like TDD, continuous refactoring and harvesting patterns for reuse.

Myth 4 –  Agile projects have No Documentation

Traditional projects produce Big Bulky Documentation (BBD) for every stage in the software life cycle, most of which is internal and tends to go stale quickly, smelling of wasted time and effort.

Agile projects instead tend to write just enough documentation, fit for purpose for the team's and project's needs. Agile documentation can take many forms and occur at multiple stages in the project lifecycle. The collaborative approach of agile teams includes using wikis, word docs, simple sticky notes and big physical boards to communicate and lightly document the project story, including the technical design and requirements elicitation discussions.

Source: Reference 4

Myth 5 –  Agile projects have No engineering discipline

Traditional projects, which produce big balls of mud, have a tendency to defer technical excellence to a post-release activity ("let's clean up the code now that we have released to production" is a common pattern).

Agile projects, by contrast, pay continuous attention to technical excellence, since agile teams know that good design enhances agility and helps longer-term maintenance. Agile teams accept change and have much simpler designs thanks to engineering practices like team coding standards, collective code ownership, writing unit tests, continuous integration and pair programming, while following test-driven development.

Source: Reference 5

The Ambysoft surveys clearly indicate that agile teams follow high engineering discipline: 72% write unit tests, 55% have coding standards, 58% do CI, 47% do refactoring, and 38% do TDD.

....and so the search continues for the elusive snake charmer. Let me know if you have seen one recently...

  1. 8th Annual State of Agile Survey
  2. Mike Cohn, Agile Estimating and Planning, Informit, 2009
  3. IBM Evolutionary Architecture and emergent design
  4. Agile Modelling
  5. Agilitrix
  6. AmbySoft Surveys

Sunday, June 1, 2014

Mr. Product Manager: Are you ready for the brave new Agile world ?

This was an interesting question, which got me thinking and led to my rant on the state of product managers at the Scrum India meetup in 2011.

The fact is that a couple of years later, I still see that product management is not ready to embrace the new world. So here's my wake-up call for them again (from my archives), in the hope that it makes at least some of them embrace the new agile world now. You can hear my rant (pecha kucha style) in the video below. Feel free to drop me a note on your experiences.

Thursday, May 22, 2014

Agile Balanced Scorecard - Does it exist ?

It is indeed a difficult question to answer! The "Agile" Balanced Scorecard may or may not exist today (the published literature on this is pretty scant), but if you are looking to develop such a scorecard, or to modify your organization's existing Balanced Scorecard, then you may want to watch my video below from the Agile India Kerala 2013 conference, titled Balanced Scorecard for the Agile Enterprise.

Hope you enjoy the video, and leave me any feedback or comments.

Saturday, September 7, 2013

3 Simple steps to build your Continuous Delivery Dashboard

Continuous Delivery is gaining traction now, but it is never easy to get funding :-|| Using Lean Value Stream Maps, however, you can showcase tangible efficiency gains by following these 3 simple steps to build your Continuous Delivery dashboard.

In uncertain times, people always struggle to get executive funding for resources (infrastructure purchases and/or dedicated people). This is where I have borrowed the Lean Value Stream Map (VSM) to build visible dashboards focused on process efficiency gains, resulting in hard $$$ savings, and to help win executive approval for funding the various activities under a Continuous Delivery initiative.

Here is a basic definition: Continuous Delivery is a set of practices and principles aimed at building, testing and releasing software faster and more frequently. The practices typically include configuration management, continuous integration, automated testing, deployment automation, build pipelines and an agile team delivering frequent releases.

From the Lean camp, the Value Stream Map (VSM) is a lean manufacturing technique used to analyze the flow of materials and information required to bring a product or service to a consumer.
For our needs, and this scenario specifically, we focus only on the value stream map for the engineering organization's delivery up to the UAT stage (which may include the deployment operations team).

The primary VSM metric to measure is Process Cycle Efficiency, where the cycle time is the total time measured from developer check-in to the deployment and testing of a 'candidate' release on a UAT test machine (or similar) - in other words, the complete time it takes to do a 'NULL release' ("If we changed one line of code in our application (or system), how long would it take us to deploy it into production using our regular release process?").

Formula for Process Cycle Efficiency (PCE)
Process Cycle Efficiency (PCE) = (Value Added Time / Cycle Time)
where Value Added Time - time spent producing a product or service the customer is willing to pay for
Cycle Time - total time from start to finish, including both value-added and non-value-added time (time spent setting up systems, equipment turnover, handoffs, or simply WASTE in the production process)
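As a quick sanity check, the PCE arithmetic can be sketched in a few lines of Python. The cycle times (980 and 220 minutes) come from the Step 3 example further below; the 180 minutes of value-added time is an illustrative assumption, not a figure from the article:

```python
def process_cycle_efficiency(value_added_minutes, cycle_time_minutes):
    """PCE = Value Added Time / Cycle Time, expressed as a percentage."""
    return 100.0 * value_added_minutes / cycle_time_minutes

# Illustrative current state: 980 minutes total cycle time,
# of which 180 minutes are value-added (build/deploy/test execution).
current_pce = process_cycle_efficiency(180, 980)

# Illustrative target state after Continuous Delivery practices: 220 minutes
# total, with the same 180 minutes of value-added work (waste removed).
target_pce = process_cycle_efficiency(180, 220)

print(round(current_pce, 1), round(target_pce, 1))
```

Note that the value-added time stays the same in both states; the efficiency gain comes purely from removing the non-value-added wait time.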

So the 3 simple steps to build your Dashboard are -
  1. Build Process Cycle Efficiency (PCE)  for Current State
  2. Build Process Cycle Efficiency (PCE)  for Target State
  3. Calculate and Track Efficiency Gains

Step 1 – Build Process Cycle Efficiency for Current State

Let us assume a simplified typical 3 stage process which includes Build – Deploy – Test activities. Each of these activities may include multiple steps and can be drilled down, but for simplicity we keep this at a high level. A typical PCE Value Stream would appear as below –

 and the typical timelines for each of the activities can be depicted as –

All data in MINUTES

Time Taken:
  Build Wait Time*   + Build Execution Time (X1)
  Deploy Wait Time*  + Deployment Execution Time (Y1)
  Test Wait Time*    + Test Execution Time (Z1)

Total Cycle Time:         TT1
Value Added Time:         V1 = X1 + Y1 + Z1
Process Cycle Efficiency: PCE1 = V1 / TT1 %

* Wait Time = Non Value added activities, which could include handoffs, signoffs, approvals, hardware latency, software latency etc..

Step 2 – Build Process Cycle Efficiency for Target State

As you implement the core practices of Continuous Delivery- Continuous Integration, Automated Testing, Continuous Deployment, Build Pipelines etc., the Target PCE Value Stream would appear as below –

and the typical timelines for each of the activities can be depicted as below –

All data in MINUTES

Time Taken:
  Build Wait Time*   + Build Execution Time (X2)
  Deploy Wait Time*  + Deployment Execution Time (Y2)
  Test Wait Time*    + Test Execution Time (Z2)

Total Cycle Time:         TT2
Value Added Time:         V2 = X2 + Y2 + Z2
Process Cycle Efficiency: PCE2 = V2 / TT2 %
*Wait Time – Reduction in wait times as a result of following the core practices or waste removal for the non value added activities.

Step 3 - Calculate and Track Efficiency Gains

The PCE can be depicted on a monthly or quarterly graph to show progress, as shown in the example below. Based on the financial data, costs can be further assigned highlighting the time savings and the resultant cost savings per quarter.
Therefore the total cycle time savings achieved = TT1 - TT2 = 980 - 220 = 760 minutes, i.e. roughly
13 hours SAVED = $$$ Savings

          Continuous Delivery Progress Dashboard

The above scenario assumed a single platform and a single build; these savings can multiply with the number of parallel builds and deployments across multiple platforms in a large-scale enterprise product.

Let me know if this helps you move forward in your continuous delivery journey, whether you have had similar experiences with your executives, and what your solution was. Till then, you may wish to mull over the quote below while you try to sell the benefits and joys of Continuous Delivery!

I have never worked a day in my life without selling. If I believe in something, I sell it, and I sell it hard. - Estée Lauder

Saturday, August 10, 2013

Is your engineering team leaning to "Heaven" or "Hell" ?

Listening to the legendary Eagles' Hell Freezes Over album, it always hits a high point for me with the lyrics -

I heard the mission bell
And I was thinking to myself
'This could be heaven or this could be Hell'

Well the mission for the engineering team(s) is to provide a continuous flow of business value to the stakeholders, with stable teams working at a sustainable pace, while improving their technical excellence daily.

But do we really know if we are any closer to achieving this mission or are we simply stuck and wondering if we are holed up and have no way out ?

So to find the answer, take the 20-question survey below and SCORE your engineering team(s) to check your WAY, and find out if you are indeed leaning towards Heaven or Hell.

For each question below, use this RATINGS SCALE to assign a score to your response -
1 – 4  : No, we do not ……… you are possibly closer to HELL than you think ~~
5 – 7  : We try and mostly succeed … you are moving closer to Heaven
8 – 10 : We do this almost every time and love it … you are reaching HEAVENly bliss !!


1. Are you using 'High Maturity' Engineering Practices and Tools ?
2. Do you have Sponsors commitment to Technical Excellence ?
3. Do you have Collective Team Ownership ?
4. Do you have Stable Team(s) ?

Agile Development Process :

5. Do you have Business Value driven requirements from Stakeholders ?
6. Does your Product Manager, Business Sponsor and Architect\lead do collaborative product discovery ?
7. Do you capture requirements using Acceptance Tests (possibly with Automated frameworks) ?
8. Do you follow Evolutionary Design for emergent requirements ?
9. Do you follow pair programming development + Test Driven Development frameworks ?
10. Do you have a culture of Continuous Refactoring ?
11. Do you have limited Work in Progress in your sprints ?
12. Do you have Pre-Submit Development Checks (including writing unit tests and peer code reviews) ?
13. Do you have Continuous Integration for Daily\Weekly builds, followed by Post Submit Checks including Static Analysis, Code Coverage, and Technical Debt metrics published to everyone ?
14. Do you do Continuous Testing at all levels (unit, component, feature, system, including non functional) ?
15. Do you use Feature toggling and Release Trains (based on Product Management launch decisions) ?

Continuous Feedback , Demos, Collaboration, Insights and Rewards :

16. Do you share continuous Alpha and Beta builds of your products and solutions company-wide, at a common location accessible to everyone ?
17. Do you have Continuous Deployment for Hosted (Saas) solutions (DevOps) ?
18. Do you have Monthly Open Spaces for Company-wide Demos ?
19. Do you have Quarter end Major Launch Demo’s, Customer experience Talks for insights / feedback ?
20. Do you follow Peer Ratings and Portfolio Rewards for your teams ?

Calculate your FINAL SCORE !!!
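As a quick sketch, the tally could be automated in a few lines of Python. This is a hypothetical helper: it averages the 20 ratings and reuses the same 1-4 / 5-7 / 8-10 bands from the ratings scale above, which is one possible interpretation of "final score":

```python
def score_team(ratings):
    """Average the twenty 1-10 ratings and map them to the survey's bands."""
    if len(ratings) != 20:
        raise ValueError("expected exactly 20 ratings")
    avg = sum(ratings) / len(ratings)
    if avg <= 4:
        verdict = "closer to HELL than you think"
    elif avg <= 7:
        verdict = "moving closer to Heaven"
    else:
        verdict = "HEAVENly bliss"
    return avg, verdict

# Example: a team rating 6 on half the questions and 8 on the rest.
avg, verdict = score_team([6] * 10 + [8] * 10)
print(avg, verdict)  # 7.0 moving closer to Heaven
```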

Hope your final score helps you to review your current state and push forward to the Heavenly bliss…!

Till then –
'Relax' said the night man,
'We are programmed to receive.
You can check out any time you like,
But you can never leave!'

Sunday, August 4, 2013

Enterprise Customer Feedback : Lost Horizon or Last Horizon ?

Are your enterprise customers giving you feedback when your engineering team wants it, or do you lose your delivery heartbeat to late or non-existent customer feedback ? To explore this further, let's rewind while fast-forwarding a little.

Today the future of Business and IT is to GO Digital, with an increased need for the CMO and CIO to coordinate and deliver value to their stakeholders. But the 2013 Gartner study for CIOs indicates that this customer value delivery gap is still a major challenge, with "the vast majority of IT organizations need[ing] to address fundamental gaps in their performance".

Though it appears that the IT world has made some progress with this metric of delivering customer value over the last decade or so (quote below), and some teams have started to learn and are now able to Build-IT-RIGHT unlike in the past, there is still a long way to go... (68% still feel that customer value is not delivered by IT).

32% of the respondents felt that delivering customer value was most valued by their organization’s executives for the delivery of Scrum-based projects 
Source: State of Scrum 2013, Scrum Alliance

This intense focus on the ability to Build-IT-RIGHT is primarily thanks to the iterative agile delivery model with short iterations, combined with XP practices such as pair programming, test-driven development, and continuous delivery (including continuous integration and continuous deployment).

But Build-IT-RIGHT assumes that the "closed loop" will always have a customer onsite, ready to provide instant feedback, and ignores the dark reality of real-world scenarios. In my experience, most IT teams implementing agile methodologies today face one or more of these situations:
  1. No colocated Customer with the IT team
  2. No colocated Customer representative with the IT team (Product Owners are just a bad substitute!)
  3. Customer feedback is non existent
  4. Customer feedback is rare - once a year via a Customer Advisory Board or similar
  5. Customer feedback has long cycles, typically more than 6 months
The naysayers will indeed argue for Cloud-based application deployments, which may be a rising trend despite tepid growth, but the majority of the world still runs on enterprise applications hosted internally by customer IT teams, which hence have a LOOONG phase-gate approach to accepting new versions. The old IT world mindset still rules, with mistrust and high risk as key factors for accepting the status quo.

For the few lucky organizations, the new-world mindset allows them to embrace the Lean Startup mode, with A/B tests as rapid feedback, dual Scrum tracks and Continuous Delivery models. But this closing of the feedback loop is still a long way from mainstream. Till then we are close, but still missing the Last Horizon to achieving IT and Business agility.

In summary, with IT teams acting as both a consumer and a provider of services to the business, this new mindset is an opportunity for CIOs to conquer this LOST Horizon !

What's your experience with customer feedback (especially from enterprise customers) ? Have you captured this LAST Horizon ? Or are you losing the horizon ?