20.08.2020 · 12 min read

Billy Bafia, Szymon Smykała (Relativity)

Test Driven Development - obstacles and best practices

Learn more about Test Driven Development (TDD) - find out why it is challenging to master and learn some of the best practices.


Test Driven Development (TDD) is a practice that is widely praised and recommended by the engineering community, yet it is hardly the de facto standard in day-to-day software development workflows. The practice proposes that when we as engineers need to make a code change or develop something new, we start by writing the most fundamental test cases to drive the creation of the actual production code. This helps ensure that we only produce what is necessary and that, throughout that production, we keep in mind best practices such as accessibility, design, and security. So, why is this practice rarely the primary preference for development?

Let’s start by recalling the practice of TDD and the fundamental rules to keep in mind as outlined by Robert “Uncle Bob” Martin. TDD is defined as “a software development process that relies on the repetition of a very short development cycle: requirements are turned into very specific test cases, then the code is improved so that the tests pass. This is opposed to software development that allows code to be added that is not proven to meet requirements” (Test-driven development).

Breaking this definition down, we utilize this process as part of the agile development methodology to confidently and rapidly make changes to our product and quickly produce deliverables. We build this confidence to rapidly deploy our code and packages by focusing on developing test code first from the specific requirements we have received. By developing the test code first, we ensure that we are only writing production code which passes the tests we have created, which in turn means that we are only writing what is necessary based on our requirements. To hold true to this process, Uncle Bob developed the following three strict rules which we know as Martin’s (2014) Three Laws of TDD:

  1. You must write a failing test before you write any production code.
  2. You must not write more of a test than is sufficient to fail, and failing to compile counts as failing.
  3. You must not write more production code than is sufficient to make the currently failing test pass.


The first rule is crucial since it forces developers away from writing any production code up front, even scaffolding and skeleton code. The second rule ensures we take a piecemeal approach to fulfilling our requirements and developing code that is modular and meets SOLID standards. Finally, by following the third rule, we can easily and quickly identify issues with our changes and revert them for investigation without worrying about losing too much time and work. We do all of this while remembering to refactor along the way and redesign our approach, if necessary, without failing any tests. So, how realistic is it to stick to these rules?
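
To make the cycle concrete, here is a minimal sketch of one red-green loop in C# with NUnit, the test framework mentioned later in this article. The Calculator class and its Add method are hypothetical examples invented purely for this illustration, not code from any real project.

```csharp
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    // Rule 1 (red): this test is written first, before Calculator exists.
    // At that point it fails to compile, which by rule 2 counts as failing.
    [Test]
    public void Add_TwoNumbers_ReturnsSum()
    {
        var calculator = new Calculator();
        Assert.AreEqual(5, calculator.Add(2, 3));
    }
}

// Rule 3 (green): only enough production code to make the test pass.
public class Calculator
{
    public int Add(int a, int b) => a + b;
}
```

From here the loop repeats: refactor while the test stays green, then write the next failing test.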

Let’s start with some business cases, because as developers we usually must align with the needs of the business. In the general best-case scenario, we negotiate the project scope and timelines ahead of time as a team. With a timeline negotiated, TDD is likely an option, as the development team has considered a test plan and has most requirements strictly defined. In this case, the business has agreed to invest time in developing test cases rather than putting the quality of the code at risk. As many companies move toward cloud-based systems and continuous integration models, test coverage from the unit level all the way up to the system level is key to successful and sustainable continuous integration and, ultimately, continuous delivery. While this is the goal, our agile culture means we are often required to change our plan based on new requirements and dependencies that may alter our intended design. Teams may no longer have the luxury of fulfilling their entire test plan and might have to forgo the lengthy and tedious process that is TDD, especially in situations where teams or individuals have not fully matured in practicing this development process (Shore & Warden, 2010).

While it’s often suggested that this process has a learning curve of two to three months, that is typically the estimate for simple, small-scale projects that do not involve complex legacy systems or UI-heavy feature development. Based on my current professional experience, that learning-curve timeframe is usually equivalent to the project timeframe itself, and throughout the entire process teams are typically bombarded with client defects, internal incidents, and minor context switching.

A separate case is investing in test coverage for legacy systems that have little or no existing coverage when teams need to redesign them. As more companies evolve toward cloud-based systems, this evolution must consider not only integrating with the cloud provider, but also reworking the system to be cloud native while maintaining the original intended requirements. With no test cases, or only limited coverage, to help understand the code, one must invest the time to fully understand the product prior to migrations and, in many cases, add test coverage to support that learning process. At Relativity, where we are migrating our legacy product to function well with cloud systems, some teams took the approach of writing test code first to ensure the original intent of the legacy code was preserved before making any major migrations and refactors. These test cases started as basic checks, implemented as full Selenium UI and NUnit system tests, that the front end was available and the APIs were properly stood up, and eventually grew into more sophisticated end-to-end tests, such as processing a document and verifying that its metadata was available for review, still via Selenium UI and NUnit system tests.
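
As an illustration of what such an initial smoke test might look like, here is a minimal sketch using NUnit with Selenium and HttpClient. The URLs, element ID, and health endpoint are hypothetical placeholders invented for the example, not Relativity's actual tests.

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class LegacySmokeTests
{
    // Hypothetical endpoints; substitute your own environment's values.
    private const string FrontEndUrl = "https://legacy.example.com";
    private const string HealthApiUrl = "https://legacy.example.com/api/health";

    private IWebDriver _driver;

    [OneTimeSetUp]
    public void StartBrowser() => _driver = new ChromeDriver();

    [Test]
    public void FrontEnd_IsAvailable()
    {
        // A rendered login element is our minimal proof that the UI is up.
        _driver.Navigate().GoToUrl(FrontEndUrl);
        Assert.IsTrue(_driver.FindElement(By.Id("login")).Displayed);
    }

    [Test]
    public async Task HealthApi_IsStoodUp()
    {
        // The API check needs no browser; any 2xx response is enough.
        using var client = new HttpClient();
        var response = await client.GetAsync(HealthApiUrl);
        Assert.IsTrue(response.IsSuccessStatusCode);
    }

    [OneTimeTearDown]
    public void StopBrowser() => _driver.Quit();
}
```

Tests this coarse are only a safety net for the migration; the end-to-end document-processing checks described above would build on the same skeleton.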

So, we decide to start a project and fully immerse ourselves in the TDD methodology, but how successful will we be in ensuring that our product is free of major bugs and properly developed to meet all of the requirements? Even before we write our first test, we have to digest the requirements and prepare a plan, or a set of test cases, that covers all the branches, edge cases, and golden workflows. Depending on the complexity of the project and how skilled the team is in the various testing methodologies, this can require an extensive amount of upfront work.

The first of a few strategies that we will briefly look at is cause-effect graphing. Simply put, this strategy translates requirements into a set of nodes connected by the actions taken on a system, where the inputs are the causes and the desired outputs are the effects. These sets of connected nodes are then translated into a decision table from which developers prepare test cases for those states. The strengths of cause-effect graphing are that requirements can sometimes be automatically parsed and translated for developers to prepare test cases, and that the resulting decision table substantially reduces the number of test cases by highlighting areas of overlap or combinations that cannot occur. This strategy works well for simple systems where feedback is limited, and it can be helpful where automation is sufficient and proven to consistently produce good test cases; however, the graph cannot represent systems that include time delays, iterations, or loops to interoperate with other systems, which may make it impractical to rely on consistently for all projects (Pfleeger, 1991).
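
To make this concrete, suppose a login requirement yields a cause-effect graph with two causes (valid user, valid password) and one effect (access granted). Each column of the resulting decision table becomes one parameterized NUnit test case, as in the sketch below; the LoginService and its rule are hypothetical stand-ins invented for the example.

```csharp
using NUnit.Framework;

[TestFixture]
public class LoginDecisionTableTests
{
    // Each [TestCase] row is one column of the decision table:
    // (cause 1: valid user, cause 2: valid password) -> effect: access.
    [TestCase(true,  true,  true)]   // both causes present -> access granted
    [TestCase(true,  false, false)]  // bad password        -> denied
    [TestCase(false, true,  false)]  // unknown user        -> denied
    // The (false, false) column is pruned: the table shows it adds
    // nothing beyond the two single-failure columns above.
    public void Login_MatchesDecisionTable(bool validUser, bool validPassword, bool expectAccess)
    {
        Assert.AreEqual(expectAccess, LoginService.Authenticate(validUser, validPassword));
    }
}

// Minimal stand-in implementation so the sketch is self-contained.
public static class LoginService
{
    public static bool Authenticate(bool validUser, bool validPassword)
        => validUser && validPassword;
}
```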

Building on our knowledge of cause-effect graphing, the next strategy to consider is state-based testing. As in the previous strategy, we define a set of nodes and the transitions between them, with events as inputs and actions as outputs, to represent the entire system as a state machine abstraction. Unlike cause-effect graphs, this model can represent components such as time delays, iterations, and loops, ultimately capturing every aspect of the system: where an interaction begins (the initial state), which sequences of interactions are allowed, and the resultant or final states. This is already an improvement, as we can define complex systems and retain those definitions in the form of UML diagrams, such as a state transition diagram, which in my opinion is a helpful starting point for quickly digesting the requirements of a system, as long as developers are also required to maintain those diagrams. The system is usually predictable in that every state should be reachable, and when a dead or magic state is encountered, it usually signifies an error and allows the designer to rethink the approach ahead of implementation. Once we achieve a minimal state machine, one with no redundant states, we can quantify how expensive our testing approach may become by analyzing the various state-based testing strategies.

In general, most state-based testing will cost developers at least kn tests, where k is the number of events and n is the number of states. The different strategies, however, allow developers to analyze how much risk they want to incur as part of their design, so that when tradeoffs need to be made, teams can reduce their testing effort to about half, kn/2, while understanding where they would be losing confidence. State-based testing thus has some great benefits over cause-effect graphing, especially for systems with complex control requirements, but it is not cheap (Binder, 2004).
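
As a rough sketch of what that kn cost looks like, consider a toy workflow with n = 3 states and k = 2 events: exercising every (state, event) pair yields kn = 6 test cases. The DocumentWorkflow machine below is hypothetical, invented purely for the illustration.

```csharp
using NUnit.Framework;

public enum DocState { Draft, Submitted, Published }
public enum DocEvent { Submit, Approve }

// A minimal state machine: undefined (state, event) pairs are ignored,
// so there are no dead states to fall into.
public class DocumentWorkflow
{
    public DocState State { get; private set; }

    public DocumentWorkflow(DocState initial = DocState.Draft) => State = initial;

    public void Fire(DocEvent e) => State = (State, e) switch
    {
        (DocState.Draft, DocEvent.Submit)      => DocState.Submitted,
        (DocState.Submitted, DocEvent.Approve) => DocState.Published,
        _ => State // ignored event: stay put
    };
}

[TestFixture]
public class DocumentWorkflowTransitionTests
{
    // The full kn suite: one case per (state, event) pair, k = 2, n = 3.
    [TestCase(DocState.Draft,     DocEvent.Submit,  DocState.Submitted)]
    [TestCase(DocState.Draft,     DocEvent.Approve, DocState.Draft)]
    [TestCase(DocState.Submitted, DocEvent.Submit,  DocState.Submitted)]
    [TestCase(DocState.Submitted, DocEvent.Approve, DocState.Published)]
    [TestCase(DocState.Published, DocEvent.Submit,  DocState.Published)]
    [TestCase(DocState.Published, DocEvent.Approve, DocState.Published)]
    public void Fire_TakesExpectedTransition(DocState start, DocEvent e, DocState expected)
    {
        var workflow = new DocumentWorkflow(start);
        workflow.Fire(e);
        Assert.AreEqual(expected, workflow.State);
    }
}
```

Dropping to roughly kn/2 here would mean, for example, testing only the pairs that change state, which illustrates where the lost confidence would come from.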

We have touched on two strategies, each with its own pros and cons for various requirements, but there are many other families of test strategies that teams can consider, such as path-based and data flow testing.

We will not go into tools like CodeScene and SonarQube that provide automated evaluation such as static analysis and symbolic evaluation. Generally, these types of automated checks should be integrated by default, since they provide valuable input at generally low cost to the developer. While these strategies can still be applied manually, it is best to leave them to the machines, as non-algebraic computations make verifying the design excessively more complex (Howden, 1987).

There is a lot that goes into TDD: business planning, requirements digestion, and test case creation. We have briefly discussed some cases where TDD is a great option for a team and some where it might not be an attractive one. Going through these scenarios, we have seen that this thought process primarily drives decision making based on the intended design of the product or end goal. TDD holds engineers to certain architectural standards, which is why many practitioners have evolved TDD to be considered more as Test Driven Design, or in some older cases Test First Design (Martin, 2014).

As with many other decisions in a project, engineers typically choose a design that best suits the development environment and skill level. This does not necessarily mean that a certain design is poor and should not be the de facto choice, but rather that, as an application developer and designer, these factors ultimately may make your application more or less testable. Test Driven Design focuses on choosing patterns and principles suitable for your application so that engineers can easily practice TDD, though you may be compromising the overall architectural design in other respects. As with design decisions in general, engineers have to weigh the pros and cons to see which design will deliver the most appropriate result. This may include revising the requirements, the test plan, and integration with other systems to ultimately decide the design of your entire project, as long as the main result of the decision is to help engineers stay focused and keep things simple. TDD strictly as a discipline has been known to harm architectures and designs, because when focusing strictly on the discipline, engineers find themselves entrenched in finding ways to make the code testable rather than relying on an approachable, clean, and flexible design for the entire project (Martin, 2017). When you think of it more broadly as a design framework, you prepare yourself for flexibility away from the trenches of strict TDD.

While TDD is a phenomenal discipline that pushes engineers to approach their code in a different way, I would urge engineers to focus more on Test Driven Design, so that the project holistically takes into account not only the requirements, but also the proof that those requirements are consistently met and a flexible design that keeps teams focused. TDD is easy in the sense that there are rules in place outlining the discipline; designing a system to be testable based on the needs of the business is a separate concern, because it means thinking early in the project about how and what the team will be doing. Test Driven Design encourages teams to consider not only production design decisions, but also the test designs and strategies that could help the project move along more efficiently and effectively. It helps engineers spot testability issues early on and weigh the pros and cons of a strict TDD discipline. Teams may decide to approach testing a component in a different way based on their desired ultimate architecture. This does not mean that teams cannot still do TDD, but it does mean they will not later find themselves redesigning their approach or forcing their code to work just so they can stick to the discipline.

In conclusion, TDD is a discipline that should be used wisely and consistently for designs that allow it. Teams should evolve to incorporate Test Driven Design into their initial planning, not so much to enable TDD, but rather to understand holistically the depth of their testing plans and how well the project will accommodate TDD. We have seen that there are different approaches to preparing test cases and different situations that may not allow teams to invest the time TDD requires; the important part is that by using Test Driven Design, you may be able to prepare for issues or roadblocks ahead of time rather than changing paths drastically later, resulting in a more accurate and timely project plan.

Ideally, Test Driven Design will also help you avoid the pitfalls of TDD that tend to lead to poorer architectures, along with those that come from forcing your code to be testable just because you want it to be. While this may all make TDD seem expensive, as the team becomes more familiar with Test Driven Design and architectures that support TDD, the cost decreases, since you gradually avoid rewriting code for testability or debugging issues caused by a lack of proper test coverage. Use TDD, but use it wisely, keeping the overall goals of your design in view: focus and simplicity. Ensure teams are well prepared and equipped with an arsenal of testing strategies, so that they have the knowledge to choose the best possible design options for a smooth and successful deployment of your application.


References

Binder, R. V. (2004). State Machines. In Testing Object-Oriented Systems: Models, Patterns, and Tools (pp. 175-268). Boston, MA: Addison-Wesley.
Howden, W. E. (1987). Symbolic Evaluation of Nonalgebraic Programs. In Functional Program Testing and Analysis. New York, NY: McGraw-Hill.
Jorgensen, P. C. (2014). Software Testing: A Craftsman's Approach (4th ed.). Boca Raton, FL: Auerbach Publications.
Martin, R. C. (2017, March 03). TDD Harms Architecture. Retrieved July 10, 2020, from The Clean Code Blog
Martin, R. C. (2014, December 17). The Cycles of TDD. Retrieved July 10, 2020, from The Clean Code Blog
Pfleeger, S. L. (1991). Cause-Effect Graphing. In Software Engineering: The Production of Quality Software. New York, NY: Macmillan.
Shore, J., & Warden, S. (2010, March 26). The Art of Agile Development: Test-Driven Development. Retrieved June 07, 2020.
Taman, M. (2019, May 10). Test-Driven Development: Really, It's a Design Technique. Retrieved May 30, 2020, from InfoQ.
Test-driven development. (2020, April 9). Retrieved May 30, 2020, from Wikipedia
