The developer is king

Well, I am almost over-emphasising the point by putting the developer above everyone else, but I am sure you know what I mean. Developers are among the most important people to keep comfortable and motivated. Unfortunately, more often than not in these big organisations they are not considered high up in the value chain.

Most business, IT and DevOps groups within the organisation do lots of things because they think it will help, without considering the developer as a critical stakeholder. A few examples:

  • There are occasions when moving developer machines to VMs may hurt productivity. If you decide to do this, make sure there is absolutely no impact on developer productivity. The last thing you want is a dissatisfied developer waiting hours for a build to compile because of network issues
  • Provide machines with plenty of RAM so that the multiple applications needed for modern-day development can be open at the same time
  • Make decisions on TDD, BDD and the like in consultation with developers. This includes the JUnit strategy as well
  • Documentation: deciding on too little or on exhaustive documentation without asking developers what they actually want
  • Not defining any recognition platform for the developers
  • Not providing any process through which developers can innovate, such as an innovation week or a day a week for non-project work
  • Not providing a career path for them

Please share anything else you have seen based on your experience.

Don’t forget the basics

There are lots of things which are pretty basic but often overlooked, and not doing them leads to big issues further down the lifecycle.

There are six common things which I have seen most large organisations fail to follow.

Functionally clustering the application

Functional clustering, or autonomy, means dividing a single monolithic application into multiple independent units, each specific to a functional area. This gives the following business benefits:

Reduced time to market achieved as a result of:

  • Fewer dependencies between teams
  • Less effort for regression testing
  • Less effort for performance testing
  • Less effort for staging deployments

Reduced operational costs achieved as a result of:

  • Effective usage of environments
  • Potential reduction in number of environments

It is particularly complex to achieve this with the OOTB products which most enterprises use, like IBM WCS, Hybris, Oracle ATG and the like, but there are always ways to work around those applications.

One of the biggest risks with providing clustering is that each product team will ask for a separate cluster. This can have a negative impact on maintainability and operational costs.

The following questions need to be answered before deciding to create a new cluster (a simple decision sketch follows this list):

  • Is the functionality being introduced functionally independent (and in turn technically independent), reasonably sized and complex enough to warrant a new cluster? The parameters outlined below could be used to determine the complexity
    • Number of use cases and their flows
    • Number of unique business interfaces required for the cluster
    • Number of unique pages in the wire frames
    • New functionality not dependent on more than 3 clusters
  • Is the frequency of business changes high and independent, and could those changes cause disruption/regression in other bundled services? Parameters include
    • Frequency of change requests
    • Size of the change requests
  • Is the business priority of this functionality sufficiently high (e.g. it needs to be available to end users even when other clusters suffer an outage)?
    • Page hits for the functionality
    • Revenue loss in case of outage
  • Is the technical complexity/resource usage of this functionality high?
    • Number of unique third party interactions
    • JVM permanent generation requirements
    • Heap occupied by the objects related to the cluster (including versions)
    • CPU utilisation due to the functionalities in the cluster

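To make the questions above easier to apply consistently, here is a minimal sketch in Java of how they could be turned into a simple, repeatable checklist. The field names and threshold values are illustrative assumptions only, not figures taken from any particular product or project.

```java
// Hypothetical decision aid for the questions above. All thresholds are
// illustrative assumptions and should be agreed with your own architects.
public class ClusterDecision {

    int useCaseCount;                 // number of use cases and their flows
    int uniqueBusinessInterfaces;     // unique business interfaces required
    int uniqueWireframePages;         // unique pages in the wireframes
    int dependentClusters;            // existing clusters the functionality depends on
    int changeRequestsPerQuarter;     // frequency of business change
    boolean mustSurviveOtherOutages;  // needs to stay up when other clusters are down

    boolean warrantsNewCluster() {
        // Assumed cut-offs, purely for illustration.
        boolean complexEnough = useCaseCount >= 5
                || uniqueBusinessInterfaces >= 3
                || uniqueWireframePages >= 10;
        boolean independentEnough = dependentClusters <= 3;
        boolean changesFrequently = changeRequestsPerQuarter >= 4;

        // A new cluster only makes sense when the functionality is complex and
        // independent, and at least one driver (frequent change or business
        // criticality) is present.
        return complexEnough && independentEnough
                && (changesFrequently || mustSurviveOtherOutages);
    }

    public static void main(String[] args) {
        ClusterDecision checkout = new ClusterDecision();
        checkout.useCaseCount = 12;
        checkout.uniqueBusinessInterfaces = 4;
        checkout.uniqueWireframePages = 8;
        checkout.dependentClusters = 2;
        checkout.changeRequestsPerQuarter = 6;
        checkout.mustSurviveOtherOutages = true;
        System.out.println("New cluster warranted: " + checkout.warrantsNewCluster());
    }
}
```

The value is not in the numbers themselves but in forcing every request for a new cluster through the same explicit criteria.
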
The overall recommendation is to break the single monolithic application into individual units to achieve agility, as that will help in scaling engineering for digital.

Are we moving towards NoOps?

THE FUTURE OF DEVOPS IS: “NO” + “OPS” = “NOOPS”

Forrester defines NoOps as:

The goal of completely automating the deployment, monitoring, and management of applications and the infrastructure on which they run.
 
What does NoOps consist of?

  • Bursting into the cloud – automated bursting into the cloud for big events and sales, so less operational planning is needed
  • Self-provisioning – self-provisioning of test, staging and production capacity rather than today's manual provisioning
  • Automated testing during deployments
  • Alert-based mindset rather than monitoring-based mindset – this doesn't mean there is no monitoring, but once any parameter in the monitoring landscape changes, automatic alerts (email, SMS) are generated and acted upon, rather than, say, 20 people sitting at machines staring at graphs (a minimal sketch of such a check follows this list)

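To illustrate the alert-based mindset, here is a minimal sketch in Java that polls a metrics endpoint and raises an alert only when a threshold is breached. The endpoint URL, threshold and notification hook are assumptions for illustration; a real setup would use your monitoring product's own alerting rules.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal alert-based check: nobody stares at a dashboard; an alert fires
// only when the measured value crosses the agreed threshold.
public class CpuAlertCheck {

    // Hypothetical endpoint returning current CPU utilisation as plain text, e.g. "87.5"
    private static final String METRICS_URL = "http://example.internal/metrics/cpu";
    private static final double THRESHOLD_PERCENT = 85.0;

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(METRICS_URL)).build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        double cpu = Double.parseDouble(response.body().trim());
        if (cpu > THRESHOLD_PERCENT) {
            sendAlert("CPU utilisation at " + cpu + "% (threshold " + THRESHOLD_PERCENT + "%)");
        }
    }

    // Placeholder for the real notification channel (email, SMS, chat webhook).
    private static void sendAlert(String message) {
        System.err.println("ALERT: " + message);
    }
}
```

Scheduled from a pipeline or cron-style job, a check like this replaces people watching graphs with a notification that fires only when action is needed.
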
In summary, DevOps is collaboration and NoOps is automation.
NoOps is the next stage in the evolution of DevOps. You just move the collaboration further into the lifecycle, i.e. into the engineering phase. NoOps is not about "not having Ops"; it is about eliminating manual, low-value handoffs.

Secret to successful front end functional testing: Perceptual diff

Perceptual diff is one of the most important concepts and tools I have encountered in recent times for successful front end functional testing.

It provides visual regression testing and bridges an important gap in automated testing for digital by focusing on the functional and layout aspects of page templates, unlike most automated tests, which focus only on integration testing.

Due to the large number of page templates in these digital applications, we have often seen a layout change such as a CSS tweak work in one template but break a different template because of some dependency between them. This is only caught in production by end users, as manual testing simply cannot find that difference if we have to deploy at speed. Regression testing of layout changes is therefore an extremely important part of automated testing.

At a high level, the concept is that you take one screenshot per page template before the release, take one more screenshot after the release, and then compare them as image bitmaps. Any differences should be analysed for typos, layout errors, wrong formatting, styling issues and so on. Dynamic pages also work because you take the screenshots with the same set of data before and after, so any change is due to the build deployment.

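To make the comparison step concrete, here is a minimal sketch in Java that compares a before-release and an after-release screenshot pixel by pixel and flags the page for review when they differ. The file names and tolerance are assumptions; the dedicated tools mentioned below add masking of dynamic regions, richer reporting and better perceptual models.

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;

// Minimal perceptual-diff sketch: compare a "before release" and "after release"
// screenshot of the same page template and count the pixels that changed.
public class ScreenshotDiff {

    public static void main(String[] args) throws Exception {
        // Hypothetical file names for one page template
        BufferedImage before = ImageIO.read(new File("homepage-before.png"));
        BufferedImage after  = ImageIO.read(new File("homepage-after.png"));

        if (before.getWidth() != after.getWidth() || before.getHeight() != after.getHeight()) {
            System.out.println("Layout change: image dimensions differ");
            return;
        }

        long differingPixels = 0;
        for (int y = 0; y < before.getHeight(); y++) {
            for (int x = 0; x < before.getWidth(); x++) {
                if (before.getRGB(x, y) != after.getRGB(x, y)) {
                    differingPixels++;
                }
            }
        }

        long totalPixels = (long) before.getWidth() * before.getHeight();
        double diffRatio = (double) differingPixels / totalPixels;
        System.out.printf("%d of %d pixels differ (%.2f%%)%n",
                differingPixels, totalPixels, diffRatio * 100);

        // Illustrative tolerance: anything above 0.1% difference needs human review.
        if (diffRatio > 0.001) {
            System.out.println("Potential visual regression - review before/after screenshots");
        }
    }
}
```

In practice you would run a comparison like this for every page template as part of the deployment pipeline and fail the build, or raise a review task, when the difference exceeds the agreed tolerance.
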
Perceptual diff considerably increases confidence in the deployment process and thus helps achieve shorter cycles and greater success.

There are various tools available, like the Google perceptual diff tool (https://github.com/myint/perceptualdiff), Diffux (http://causes.github.io/blog/2014/02/19/visual-diffs-with-diffux), GNU pdiff, QTP bitmap compare and the like. Essential features and evaluation criteria for any chosen tool:

  • Capture expected and actual image
  • Compare specific section
  • Deals with dynamic pages
  • Result analysis
  • Reporting
  • Complexity of installation
  • Licenses
  • Knowledge available
  • Time to implement the same

In summary, perceptual diff is one of the most important concepts and tools I have encountered in recent times. It provides visual regression testing and bridges an important gap in automated testing for digital by focusing on the functional and layout aspects of page templates.

Please share any additional thoughts on other approaches used for front end functional automation.

Beware of talkers

As more organisations realise the benefits of DevOps and continuous delivery, they will be approached by many people claiming to know the topic. These people talk big and sound philosophical on most occasions.

The most important things to verify about these individuals are:

  1. Where have they implemented this before? Was it for small brands or big enterprises?
  2. What team sizes have they dealt with? We are talking about around 200 engineers.
  3. What financial transactions did they have to worry about when they said "Don't be afraid to fail"?
  4. How many integrated teams have they worked with?
  5. How many legacy applications did the main platform integrate with?
  6. How many 3rd party applications and partners were involved?

Lastly, the most important thing to discover: ask them to lead one delivery team and take it through the journey so that others can learn, rather than sitting on the fence and giving instructions.

Also, most of the projects they will claim to have led will be so big that you really have to find out where and what this specific person actually did. I once had an agile coach walk up to me and say he had done a big transformation. When I asked where, he quoted the same client where I was leading. I was shocked, as I had never seen him anywhere on the project. After digging for more details, I found out that he had been on the project for a couple of months trying to do something which never even materialised.

In summary, I am not saying you don't need help. You certainly do, but please get the right help from the right person, or else you may set a direction and change the culture in ways that will be difficult to correct in the future.

That’s what we did in the last project and it was a success

This is such a common phrase in all the large organisations. Most of the people in these organisations come from big places and have pre-determined ideas as to how things should work. In any discussion they refer to a past project and say, "That's what we did in the last project and it was a success."

Where they and most of us go wrong is we mix facts with interpretations.

Let's analyse.

Statement – "I implemented a custom test automation framework and the project went live on time."

If we break this statement down, we have two facts: a) implementation of custom test automation, and b) the project going live on time.

We don't know whether they are linked or not. Now, my interpretation is that they are linked, whereas it is absolutely possible that the single biggest issue they had for go-live was this very custom test automation.

This is why we need to be wary of "I have done it in the past" people. Every client is different, so we should analyse and differentiate between facts and interpretations.