<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Mario Cervera's Blog]]></title><description><![CDATA[Software craftsman. I write about Software Engineering (clean code, design, testing, ...) and also about Algorithms & Data Structures.]]></description><link>https://mariocervera.com</link><generator>RSS for Node</generator><lastBuildDate>Mon, 13 Apr 2026 22:02:37 GMT</lastBuildDate><atom:link href="https://mariocervera.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Is Software Engineering Really Dead in the AI era?]]></title><description><![CDATA[Is Software Engineering really dead in the Artificial Intelligence (AI) era? This question is both provocative and widely debated. It is generating a great deal of discussion on social media and among software developers in general. In this post, I s...]]></description><link>https://mariocervera.com/is-software-engineering-really-dead</link><guid isPermaLink="true">https://mariocervera.com/is-software-engineering-really-dead</guid><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[clean code]]></category><dc:creator><![CDATA[Mario Cervera]]></dc:creator><pubDate>Tue, 03 Feb 2026 23:17:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770160479653/adb6a181-57a7-4bcd-9f35-ee3927e4a549.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Is Software Engineering really dead in the Artificial Intelligence (AI) era? This question is both provocative and widely debated. It is generating a great deal of discussion on social media and among software developers in general. 
In this post, I share my perspective on the issue.</p>
<p>Let’s start at the beginning.</p>
<h2 id="heading-the-prediction">The prediction</h2>
<p>If you work as a software engineer and regularly browse social media platforms such as LinkedIn or X, you will most likely come across an unsettling prediction:</p>
<blockquote>
<p>Software Engineering will be dead in less than one year.</p>
</blockquote>
<p>It is a harsh statement, but it reflects a belief that is in fact widespread. If you think about it for a moment, there is some logic to it. State-of-the-art <strong>AI coding assistants</strong>, such as Claude Code, <strong>are incredibly powerful</strong>. They have reached a level of capability that would have been hard to believe just a few months ago.</p>
<p>This technological reality has led many people to believe that we no longer need programming languages. We can describe software behavior in plain English, and, if the result is not what we expect, we can simply refine our words until the software behaves as we want. In this view, we do not need software engineers. Anyone can build software because coding is automated. AI has raised the level of abstraction to a point where natural language is enough to tell computers what to do.</p>
<h2 id="heading-where-in-my-opinion-the-prediction-goes-wrong">Where, in my opinion, the prediction goes wrong</h2>
<p>The prediction rests on two premises:</p>
<ol>
<li><p>From natural language, AI coding assistants can generate source code (in a high-level language such as Python, Kotlin or C++).</p>
</li>
<li><p>From source code, compilers can generate assembly code (or a binary executable representation).</p>
</li>
</ol>
<p>While these two statements are true, a subtle detail causes the prediction to break down: the first premise is not a deterministic transformation. Given the same sentences in natural language, there is no guarantee that we will obtain the same source code. This is because AI coding assistants do not follow a fixed procedure. They rely on probability distributions: they make choices based on what seems most likely to be correct according to patterns learned from millions of examples in their training data.</p>
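<p>The contrast between the two premises can be made concrete with a toy sketch. The snippet below is purely illustrative – the "completions" and their probabilities are invented, and no real model works this simply – but it captures why sampling from a probability distribution means identical prompts can yield different code, while a compiler always maps the same source to the same output:</p>

```python
import random

# Toy next-token probabilities for the prompt "sort the list".
# In a real LLM these come from a neural network; the values here
# are invented purely for illustration.
NEXT_TOKEN_PROBS = {
    "sorted(items)": 0.6,
    "items.sort()": 0.3,
    "bubble_sort(items)": 0.1,
}

def generate(prompt: str) -> str:
    """Sample one completion: identical prompts may yield different code."""
    completions, weights = zip(*NEXT_TOKEN_PROBS.items())
    return random.choices(completions, weights=weights, k=1)[0]

# A compiler, by contrast, is deterministic: the same source always
# maps to the same output. Here, 100 identical prompts typically
# produce more than one distinct completion.
outputs = {generate("sort the list") for _ in range(100)}
print(outputs)
```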
<p>This is more relevant than it may seem. It means that <strong>we lose control</strong> of the software that we are building. Let’s illustrate it by means of an example.</p>
<h2 id="heading-a-realistic-example">A realistic example</h2>
<p>Imagine a team that is responsible for a live production system. None of the team members are software engineers, so they lack technical knowledge. Instead, they maintain the system exclusively using AI. Suddenly, the system starts malfunctioning due to a critical bug. The team tries to fix it using natural language, but every attempt fails. What can the team do? The only option is to dig into the source code, but they can’t because they are not software engineers.</p>
<p>The takeaway is that a probabilistic tool cannot be held responsible for a live system, especially if it is safety-critical. <strong>There will be times</strong> when we need control: moments <strong>that demand certainty</strong>, not probability.</p>
<p>Those who claim that software engineering is dead would not trust the software that runs their company entirely to AI. They’d put humans who deeply know the system in charge.</p>
<blockquote>
<p>Building a throwaway app quickly is one thing. Maintaining a product that must remain profitable over the long term is another.</p>
</blockquote>
<p>When software must stand the test of time, it must be built in a way that others can maintain it long after you are gone. It must be built with quality attributes (such as testability and code readability) in mind, and these attributes remain primarily the responsibility of humans, not machines.</p>
<h2 id="heading-an-understandable-trend">An understandable trend</h2>
<p>While there is no evidence that source code will lose its relevance or that professional software can be entirely developed and maintained by AI, I find the trend understandable.</p>
<p>My impression is that, in the software industry, many developers (and even more non-technical roles such as managers or salespeople) have never cared much about code and its quality. Technical practices such as the ones proposed by eXtreme Programming (XP) are the exception, not the norm. Any software engineer with a few years of experience has likely worked on a legacy codebase that is difficult to understand and change, or on a team that is unwilling to pair-program, write automated tests, or refactor code.</p>
<p>These are the people who claim that Software Engineering is dead. They have never been enthusiastic about code quality, so why would they start now? This is the perfect opportunity to ignore engineering practices without guilt.</p>
<p>In the era of AI, the challenge for us, software engineers, is to show that code quality (and the practices that enable it) matter today as much as they ever have, and perhaps even more.</p>
<h2 id="heading-the-two-audiences-of-code">The two audiences of code</h2>
<p>Many people argue that we should stop paying attention to AI-generated code. After all, software engineering is dead, right?</p>
<p>I strongly advise the opposite.</p>
<p>In the age of AI, machines not only execute code; they care about meaning and intent, which makes readability relevant for machines as well. Does it make sense to stop reading code and caring about its quality now that code has <strong>two audiences: humans and machines</strong>?</p>
<p>AI agents do not inherently produce well-crafted code. They are not optimized for craftsmanship during their training. They may miss abstractions, introduce duplication, or choose inappropriate variable names, for example. It is our responsibility to address these issues. Continuous refactoring is the only way to improve code quality over time, and higher-quality code reduces the likelihood that AI agents misinterpret your intentions, which in turn makes them less likely to make mistakes.</p>
<p>Continuous refactoring is also crucial at the level of system architecture. There is a limit to how much a person can hold in mind at once, and the same applies to AI agents. The more you ask an AI agent to focus on, the less attention it can give to any individual part. Context should be minimized to make code easier for AI agents to understand. This is why separation of concerns is such an important design principle. Just as higher modularity reduces cognitive load for humans, it also reduces the context that AI agents must process.</p>
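<p>As a small sketch of this principle (the insurance domain and all names below are invented for illustration), compare a function that mixes validation, pricing, and formatting with a version that separates them. Each piece of the second version can be read, tested, or modified with minimal surrounding context:</p>

```python
# Mixed concerns: validation, pricing, and formatting in one place.
# Anyone (human or AI agent) touching it must hold all three in context.
def quote(age, coverage):
    if age < 18:
        raise ValueError("applicant must be an adult")
    price = 100 + coverage * 0.05
    return f"Your quote: EUR {price:.2f}"

# Separated concerns: each function has one reason to change.
def validate_applicant(age: int) -> None:
    if age < 18:
        raise ValueError("applicant must be an adult")

def premium(coverage: float) -> float:
    return 100 + coverage * 0.05

def format_quote(price: float) -> str:
    return f"Your quote: EUR {price:.2f}"

def quote_v2(age: int, coverage: float) -> str:
    validate_applicant(age)
    return format_quote(premium(coverage))

print(quote_v2(30, 2000))  # Your quote: EUR 200.00
```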
<p>The problem is that AI agents tend to struggle at larger scales because they are better suited to local context, not global reasoning. <strong>This leaves software architecture primarily a human-driven activity</strong>.</p>
<h2 id="heading-engineering-practices">Engineering practices</h2>
<p>The relevance of software quality strongly suggests that Software Engineering is far from dead, but we can make the argument even stronger. <strong>Software Engineering is not only about code</strong>. <strong>It encompasses all the engineering practices that enable quality and sustainable delivery of value to customers</strong>. Unsurprisingly, the engineering practices that have proven effective for decades seem to be the perfect fit for this new era of AI-assisted development.</p>
<p>What role do these practices play?</p>
<p>AI agents have undoubtedly increased coding speed, but this does not necessarily translate into delivering more value to users in less time. Ultimately, it depends on the team.</p>
<ul>
<li><p>In a team where code is not frequently integrated into the main branch, increasing coding speed will lead to more merge conflicts, and will make these conflicts bigger and harder to resolve.</p>
</li>
<li><p>In a team where deployments are not automated, increasing coding speed will create more work for operations teams.</p>
</li>
<li><p>In a team where testing is treated as an after-development phase, increasing coding speed will create more work for QA teams and more bugs to address later, when they are more expensive to fix.</p>
</li>
<li><p>In a team where collaboration is weak, increasing coding speed will result in more rework and misalignment.</p>
</li>
</ul>
<p>In such a team, coding is not the bottleneck; the way the team works is. Optimizing a non-bottleneck in a system that has real bottlenecks will not make the system more efficient. It will probably make it less so.</p>
<p><strong>The key to being effective with AI coding assistants is to be effective without them</strong> [1]. A team that works in small batches and follows time-tested practices such as refactoring and continuous delivery will benefit from a huge boost in productivity when using AI tools. A team that lacks discipline may feel an initial surge in coding speed, but it will lose traction with each passing day, as technical and comprehension debts accumulate.</p>
<h2 id="heading-healthy-and-productive-teams">Healthy and productive teams</h2>
<p>In addition to engineering practices, the composition of teams within a software company can also affect how effectively the company leverages AI.</p>
<p>In a company where teams are organized around specific skill sets (such as QA, front-end, business analysts, DevOps, etc.), customers will not receive value any faster because delivering software requires extensive inter-team communication. In this context, coding is clearly not the bottleneck, so accelerating it will have little positive impact.</p>
<p>Note that the same bottlenecks can arise within a team if members are overly specialized. To fully benefit from AI, teams need to be cross-functional and team members must maintain healthy and effective communication.</p>
<p>Last but not least, the team requires <strong>autonomy</strong>. In command-and-control hierarchies, where decision-making is slow and managers do not trust the team, increasing coding speed will have little effect because the team will spend most of its time waiting for approvals. The team must be empowered to make decisions on the spot, as they are needed, so that decision-making becomes as continuous as activities like code integration, testing, and refactoring.</p>
<h2 id="heading-conclusions">Conclusions</h2>
<p>Contrary to what many believe, Software Engineering is far from dead. Coding is not fully automated, and even if it were, Software Engineering is not only about code. Teams must be effective without AI to fully benefit from AI coding assistants. Engineering practices such as continuous integration and refactoring, which promote clean and readable code, are essential to ensuring that AI adds value, rather than becoming a source of technical and comprehension debts.</p>
<p>There is no denying that state-of-the-art AI coding assistants are remarkably capable today, but the main implication is simply that software engineers have an incredibly powerful tool at their disposal.</p>
<p>Coding has changed forever. Software engineers must adapt to this new era, just as they have countless times before, at historical milestones such as the introduction of the first compilers and high-level languages, or the rise of the World Wide Web. This time is no different.</p>
<h2 id="heading-references">References</h2>
<p>[1] Jason Gorman, <a target="_blank" href="https://codemanship.wordpress.com/2025/10/30/the-ai-ready-software-developer-index/">The AI-Ready Software Developer</a> (2025)</p>
]]></content:encoded></item><item><title><![CDATA[Key Challenges of AI-Assisted Software Engineering]]></title><description><![CDATA[Recently, I published a post on accelerating software engineering with the help of Artificial Intelligence (AI). In that post, I shared my team’s hands-on experience with AI agents, exploring four scenarios where we could easily achieve a significant...]]></description><link>https://mariocervera.com/key-challenges-ai-assisted-software-engineering</link><guid isPermaLink="true">https://mariocervera.com/key-challenges-ai-assisted-software-engineering</guid><category><![CDATA[AI]]></category><category><![CDATA[TDD (Test-driven development)]]></category><category><![CDATA[clean code]]></category><category><![CDATA[llm]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Mario Cervera]]></dc:creator><pubDate>Tue, 06 Jan 2026 23:29:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767741820175/10db1b69-d7dd-4bfd-9897-4d2e16b3be01.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently, I published a <a target="_blank" href="https://mariocervera.com/leveraging-ai-accelerate-software-engineering">post</a> on accelerating software engineering with the help of Artificial Intelligence (AI). In that post, I shared my team’s hands-on experience with AI agents, exploring four scenarios where we could easily achieve a significant boost in productivity. In these four scenarios – which included, for example, developing short-lived scripts – software quality was not a primary concern, so we could move faster by relaxing our usual quality standards.</p>
<p>Here, in this follow-up post, I shift focus from these best-case scenarios to explore <strong>a more realistic situation where quality should not be compromised: introducing new long-lived functionality into the codebase of a live production system</strong>.</p>
<p>Introducing new functionality into our production system brought to light several challenges that may be easily overlooked by teams that are beginning to adopt AI tools. Below, I describe these <strong>challenges</strong> in detail and outline <strong>strategies</strong> for addressing them. These strategies will allow you to speed up your development process while <strong>keeping quality high in the age of AI</strong>.</p>
<h2 id="heading-1-misunderstanding-the-system">1. Misunderstanding the system</h2>
<p>In an AI-powered IDE (such as Cursor), asking an agent to implement a new feature may require the agent to fully understand and reason about your entire project. Any gaps or misunderstandings will lead to problems. For example, the agent may:</p>
<ul>
<li><p>Overlook files that should be modified.</p>
</li>
<li><p>Suggest changes to files that should remain unchanged.</p>
</li>
<li><p>Generate tests for unintended behavior.</p>
</li>
<li><p>Miss tests for expected behavior.</p>
</li>
</ul>
<p>Many factors can lead an agent to misunderstand your system. Some of these factors are intrinsic to the nature of the project and are difficult to address. For example, the system that I describe in my previous <a target="_blank" href="https://mariocervera.com/leveraging-ai-accelerate-software-engineering">post</a> is event-driven. Event-driven systems are inherently decoupled: an event is published and multiple subscribers react, each potentially located in distant and seemingly unrelated parts of the system. Establishing these logical connections by inspecting the code is a difficult task, even for an AI agent.</p>
<p>Other factors, however, are within our control, and we can take proactive action to help the AI agent interpret our system correctly.</p>
<h3 id="heading-11-how-to-mitigate-this-problem">1.1. How to mitigate this problem?</h3>
<p>Harold Abelson and Gerald Jay Sussman stated in their seminal book “<em>Structure and interpretation of computer programs”</em>:</p>
<blockquote>
<p>Programs must be written for people to read, and only incidentally for machines to execute.</p>
</blockquote>
<p>Martin Fowler expressed a similar idea:</p>
<blockquote>
<p>Any fool can write code that a computer can understand. Good programmers write code that humans can understand.</p>
</blockquote>
<p>This way of thinking has positively influenced many software engineers for decades, but it is misaligned with the current technological reality. In the age of AI, machines not only execute code; they care about meaning and intent, making readability relevant for machines as well.</p>
<p>Clear and intent-revealing variable names, combined with strong separation of concerns, a well-designed domain model and a comprehensive test suite (which offers unambiguous examples of the system’s actual behavior) are now more relevant than ever. Writing high-quality code reduces the likelihood that AI agents misinterpret your intentions, which in turn makes them less likely to make mistakes.</p>
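<p>As a brief illustration (the names and domain below are invented for this example, not taken from any real codebase), compare how much intent the second version conveys to any reader, human or machine:</p>

```python
# Opaque: a reader (or an AI agent) must reverse-engineer the intent.
def f(d, x):
    return [i for i in d if i[1] > x]

# Intent-revealing: the names alone describe the business rule.
def filter_policies_above_premium(policies, minimum_premium):
    """Return the policies whose premium exceeds the given threshold."""
    return [
        policy for policy in policies
        if policy[1] > minimum_premium
    ]

policies = [("home", 300), ("car", 150), ("travel", 80)]
print(filter_policies_above_premium(policies, 100))
# [('home', 300), ('car', 150)]
```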
<blockquote>
<p>In the AI era, we must write clean, readable code for both humans and machines, as both need to understand it to apply changes safely and effectively.</p>
</blockquote>
<h2 id="heading-2-loss-of-control">2. Loss of control</h2>
<p>Have you ever engaged in pair programming with a driver who moves at a frenetic pace, jumping between files and changing large chunks of code so quickly that it seems impossible to keep up?</p>
<p>Welcome <strong>the cowboy (or cowgirl) driver</strong>.</p>
<p>The cowboy driver gives little consideration to the navigator. The only priority is speed and the navigator is an obstacle, not a partner. When I am working with such a driver, I feel that I <strong>lose control</strong>. I am unsure whether new bugs are being introduced or whether all relevant tests are in place. I have no time to gather my thoughts and offer meaningful suggestions.</p>
<p>AI agents make this situation more likely. The driver will be producing more code in less time, since most of the code will be AI-generated. This leaves the navigator even less time to inspect it, unless the driver is disciplined about code reviews.</p>
<p>Note that this problem can also arise when programming alone. An AI agent is like a pair-programming partner that produces code at great speed. It is tempting not to review the code carefully. When this happens, you lose control and <strong>the snowball effect</strong> will take over: quality will deteriorate increasingly fast with each passing day.</p>
<h3 id="heading-21-how-to-mitigate-this-problem">2.1. How to mitigate this problem?</h3>
<p>To mitigate this problem, you need to <strong>slow down to regain control</strong>. A technical practice that is well suited to this purpose is <strong>Test-Driven Development (TDD)</strong>. TDD encourages progress in tiny steps, writing one test at a time and ensuring each test passes before moving on to the next test.</p>
<p>From my brief experience with AI-powered IDEs, I have found that, when working with AI agents, you can take slightly larger steps. An AI agent can write a test and make it pass in a single step; or it can handle multiple tests at once, passing them in one go. Your level of confidence will guide you when deciding the size of your steps.</p>
<p>Regardless of your choice, <strong>review your code carefully and do not skip refactoring</strong>. Regular refactoring keeps you close to the code, improves its quality over time, and ultimately lets you move faster in the long run.</p>
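<p>A single TDD cycle can be sketched as follows. The pricing rule, amounts, and names are hypothetical, invented only to show the rhythm of the practice: one small test, the simplest code that passes it, then refactoring before the next test.</p>

```python
# Step 1 (red): write one small failing test first.
def test_discount_applies_above_threshold():
    assert discounted_price(20_000) == 18_000  # 10% off above 100.00
    assert discounted_price(8_000) == 8_000    # no discount below it

# Step 2 (green): the simplest code that makes the test pass.
def discounted_price(cents: int) -> int:
    """Prices in integer cents; 10% discount above 10,000 cents."""
    if cents > 10_000:
        return cents * 9 // 10
    return cents

# Step 3 (refactor): clean up while the test stays green, then write
# the next test. Each short cycle is a natural checkpoint to review
# what was just generated, whether by you or by an AI agent.
test_discount_applies_above_threshold()
print("green")
```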
<h2 id="heading-3-weakened-learning-loop">3. Weakened learning loop</h2>
<p>Software development is inherently iterative. You add a feature to the system, show it to customers, gather feedback, and iterate. At a lower level, you define expected behavior as an automated test, make the test pass, refactor, and iterate by writing the next test.</p>
<p>There are many levels in between, but they all have one thing in common: <strong>the learning loop</strong>. Each iteration – whether with a customer, a teammate, or yourself – produces valuable new knowledge.</p>
<p>When AI agents enter the picture, the learning loop is weakened. Developers interact less directly with the code and this causes two main problems.</p>
<ol>
<li><p><strong>Reduced familiarity with the codebase</strong>. When most of the code is generated automatically, developers spend less time reasoning about it. This is especially problematic for new team members, who need hands-on interaction with the code to understand how the system actually works.</p>
</li>
<li><p><strong>Eroded technical skills</strong>. Much like forgetting how to divide manually once we start relying on a calculator, writing less code can negatively affect your abilities. Over time, depending heavily on AI can make it harder for you to do technical work without automated assistance. Some people may argue that you will not need these technical skills in the future, but no one can reliably predict how the field will evolve, and sacrificing foundational skills for short-term convenience is a risky trade-off.</p>
</li>
</ol>
<h3 id="heading-31-how-to-mitigate-this-problem">3.1. How to mitigate this problem?</h3>
<p>It may sound obvious, but you can reap the benefits of interacting with the code by interacting with the code. When you ask an AI agent to generate code, <strong>do a thorough code review</strong> and do not hesitate to <strong>edit the code manually</strong>, for example, to add a test that the AI agent missed.</p>
<p>Another great way to stay engaged with the code is through <strong>refactoring</strong>. It has always been tempting to skip refactoring. Most developers make the code work and move on to the next feature, leaving behind suboptimal solutions. You can set yourself apart by making small improvements whenever you identify an opportunity. Not only does this improve code quality, but it also has a positive impact on your learning.</p>
<h2 id="heading-further-advice">Further advice</h2>
<p>You can also follow these general guidelines to fully leverage AI agents and mitigate the problems described above:</p>
<ul>
<li><p><strong>Be specific</strong>. The AI agent knows less than you do. Explaining a technical solution in plain English can be challenging, but it is essential to provide as much context as possible. Imagine that you are explaining the problem to a brilliant junior developer. Despite being highly productive, they still need every detail to understand the problem fully.</p>
</li>
<li><p><strong>Write commands and rules</strong>. In AI-powered IDEs like Cursor, you can store coding conventions and good practices as rules or reusable commands. By doing so, you reduce errors, maintain consistency across the codebase, and free yourself to focus on problem solving (instead of repeating the same instructions over and over).</p>
</li>
</ul>
<h2 id="heading-conclusions">Conclusions</h2>
<p>AI agents can accelerate software engineering, but they introduce challenges that, if not carefully managed, can negatively affect quality and the health of the team.</p>
<p>This post outlines three of these challenges and proposes practical ways to address them. Practices such as TDD, refactoring and disciplined code reviews, which promote clean and readable code, are essential to ensuring that AI adds value, rather than becoming a source of technical debt.</p>
<p>AI agents will make you produce bad code faster, if you are careless; but, if you are disciplined, they will help you produce good code just as quickly. <strong>AI agents act as amplifiers of your current outputs</strong>. Software teams that harness AI wisely will deliver value more effectively, while those that use it unwisely will gradually lose their ability to satisfy customers, slowing down as problems accumulate.</p>
]]></content:encoded></item><item><title><![CDATA[Leveraging AI to Accelerate Software Engineering]]></title><description><![CDATA[Over the past few months, my team and I have been making extensive use of Artificial Intelligence (AI) – specifically Cursor IDE – in our day-to-day work. It started as a small experiment, but it quickly evolved into a deeper exploration of how Large...]]></description><link>https://mariocervera.com/leveraging-ai-accelerate-software-engineering</link><guid isPermaLink="true">https://mariocervera.com/leveraging-ai-accelerate-software-engineering</guid><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[AI]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[llm]]></category><dc:creator><![CDATA[Mario Cervera]]></dc:creator><pubDate>Fri, 12 Dec 2025 00:10:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765498042726/782e5c7a-8b8e-4478-9cfe-eaa6bc690123.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Over the past few months, my team and I have been making extensive use of Artificial Intelligence (AI) – specifically Cursor IDE – in our day-to-day work. It started as a small experiment, but it quickly evolved into a deeper exploration of how Large Language Models (LLMs) influence our software development process in an enterprise environment.</p>
<p>We are a development team that is strongly committed to software quality, since it is what enables us to deliver business value at a sustainable pace. Extreme Programming (XP) practices, such as Test-Driven Development (TDD) and pair programming, help us keep quality high and also counteract the natural tendency of software to degrade over time.</p>
<p>When we started using AI – with no prior hands-on experience – we were naturally worried about how AI would impact the quality of the software that we delivered. AI might help us deliver value more quickly, but the speed should never come at the expense of software quality, as this would compromise sustainability.</p>
<p>Despite our concerns, we did not conduct empirical assessments to measure code quality. Instead, we relied on regular communication with stakeholders to learn about their perception of how the team’s delivery pace was evolving. We also paid close attention to the code, carefully monitoring whether it was becoming more manageable or more challenging to work with.</p>
<p>I will discuss how to keep – even increase – quality in the age of AI in an upcoming post. In this post, I will relax the keep-quality-high restriction. I will describe four situations where quality is less critical, or where you can expect the quality of the generated code to match the quality of your current code. <strong>In these four specific cases, LLMs can offer a significant increase in productivity</strong> with minimal concern.</p>
<p>Before we dive in, let’s look at a running example that will help me illustrate the key insights in each section of this post.</p>
<h2 id="heading-a-running-example">A running example</h2>
<p>Let’s say that your team is building a WhatsApp chatbot for an insurance company. Whenever a user of the insurance company’s website provides their contact details, they are classified as a potential customer and the bot initiates a conversation. It typically begins with a series of profiling questions that are designed to learn more about the user’s needs and preferences.</p>
<p>Some of the profiling questions that the bot may ask are:</p>
<ul>
<li><p>What type of insurance policy are you looking for?</p>
</li>
<li><p>Are you currently insured with any company?</p>
</li>
<li><p>What type of coverage do you have?</p>
</li>
</ul>
<p>Other questions may be more specific, tailored to a particular type of insurance policy:</p>
<ul>
<li><p>Do you own a home?</p>
</li>
<li><p>How old is your home?</p>
</li>
<li><p>What type of property do you live in?</p>
</li>
</ul>
<p>The information gathered through these questions helps the bot suggest the most relevant insurance options to users, sometimes adapting the recommendations to better suit individual needs.</p>
<h2 id="heading-1-throwaway-code">1. Throwaway code</h2>
<p>One of the services of the WhatsApp chatbot calculates the policy price given a specific user profile. This service is deployed as an AWS Lambda and its logs are available through AWS CloudWatch.</p>
<p>Suppose that you need to download the CloudWatch logs of the past two months and prepare a statistical report. This is a one-off task that has been assigned to you to meet the needs of a particular stakeholder. While you could perform the task manually, the volume of data makes it impractical. Automating the process via a script is a far more efficient solution.</p>
<p>The key observation here is that the script will have no long-term use; therefore, code quality is not a primary concern. This is the perfect opportunity to leverage LLMs for code generation. If you describe the task in natural language, with sufficient detail, <strong>you will have the script in a matter of seconds</strong>.</p>
<p>Keep in mind that the script may not fully meet your needs on the first try, but it will provide <strong>a solid starting point</strong>. You can then tweak the script manually, or you can also refine your prompt by adding more specific details to guide the LLM in the right direction.</p>
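<p>The download step itself requires AWS credentials, but the flavor of such a throwaway script can be sketched with the reporting part alone. The log format, field names, and metrics below are invented for illustration; a real script would adapt them to the Lambda's actual output:</p>

```python
import re
from statistics import mean

# Hypothetical log lines, as they might look after downloading the
# Lambda's CloudWatch output; the format is invented.
LOG_LINES = [
    "2026-02-01 INFO price_calculated policy=home price=312.50",
    "2026-02-01 INFO price_calculated policy=car price=150.00",
    "2026-02-02 ERROR price_calculation_failed policy=travel",
    "2026-02-03 INFO price_calculated policy=home price=299.00",
]

PRICE_PATTERN = re.compile(r"policy=(\w+) price=([\d.]+)")

def summarize(lines):
    """One-off report: line count, error count, mean price per policy."""
    prices = {}
    errors = 0
    for line in lines:
        if " ERROR " in line:
            errors += 1
            continue
        match = PRICE_PATTERN.search(line)
        if match:
            policy, price = match.group(1), float(match.group(2))
            prices.setdefault(policy, []).append(price)
    return {
        "total": len(lines),
        "errors": errors,
        "mean_price": {p: mean(v) for p, v in prices.items()},
    }

print(summarize(LOG_LINES))
```

Throwaway quality standards apply: no tests, no abstractions, just enough structure to produce the report once.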
<h2 id="heading-2-repetitive-tasks">2. Repetitive tasks</h2>
<p>Suppose that a user sends a video or a text message in a WhatsApp conversation. To respond to this action, the WhatsApp chatbot must be implemented as an event-driven system. Whenever a user action occurs, the system is notified by Meta (or any other intermediary) through a <em>WhatsApp Message Received</em> event. In this event-driven context, most new features – at least, those triggered by incoming WhatsApp messages – are added to the system through a new event handler that invokes the appropriate use-case class.</p>
<p>For example, let’s say that you need to add the following feature: if the user types “stop”, you must stop sending marketing messages about new insurance offers. To implement this feature, you will create a new event handler that invokes a stop-marketing-messages use case whenever a “stop” message is received.</p>
<p>This new event handler may be the tenth, eleventh or n-th that you implement. Since they are all similar, <strong>if you ask an LLM to do it, the LLM will have plenty of code to reference and base its decisions on, making its output highly likely to meet your needs</strong>, both in terms of value and code quality. You don’t have to build the event handler (and its tests) from scratch – just ask the LLM to do it and the LLM will generate the code almost instantly.</p>
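<p>A minimal sketch of what such wiring might look like, with an in-process dispatcher standing in for whatever messaging infrastructure the real system uses (the event shape, dispatcher, and use-case names are all hypothetical):</p>

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class WhatsAppMessageReceived:
    user_id: str
    text: str

# Toy in-process dispatcher; a real system would use a message broker.
handlers: list[Callable[[WhatsAppMessageReceived], None]] = []

def subscribe(handler):
    handlers.append(handler)
    return handler

def publish(event: WhatsAppMessageReceived) -> None:
    for handler in handlers:
        handler(event)

# The use case invoked by the new handler (hypothetical).
opted_out_users: set[str] = set()

def stop_marketing_messages(user_id: str) -> None:
    opted_out_users.add(user_id)

# The n-th handler, structurally identical to its siblings:
# react to the event, delegate to the appropriate use case.
@subscribe
def on_stop_message(event: WhatsAppMessageReceived) -> None:
    if event.text.strip().lower() == "stop":
        stop_marketing_messages(event.user_id)

publish(WhatsAppMessageReceived(user_id="u42", text="stop"))
print(opted_out_users)  # {'u42'}
```

Because every handler follows this same shape, an LLM asked to add the next one has abundant local precedent to imitate.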
<h2 id="heading-3-experimental-context">3. Experimental context</h2>
<p>Let’s say that the insurance company needs to learn about insurance claims or accidents that the user may have had in the last three years. Your team may obtain this information via a new profiling question in the WhatsApp chatbot. However, this approach raises a few concerns.</p>
<p>Many users may be unwilling to answer and could abandon the conversation, resulting in the loss of potential customers. Furthermore, whether users answer or not may be affected by the timing of the question. For example, a user may ignore the question if it appears at the start of the conversation, but they may respond if the question is asked later. In any case, it is essential to understand users’ behavior to offer the best possible experience and reap maximum benefit.</p>
<p>Your team decides to run an A/B test. In one variant, the new question is placed at the beginning of the conversation, while in the other, the question is introduced later in the conversation. The goal is to determine whether the order of the questions affects user responses.</p>
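<p>A throwaway A/B mechanism of this kind might look like the sketch below – a deterministic split so that each user always sees the same variant. The names and the bucketing strategy are assumptions for illustration, not the actual implementation.</p>

```typescript
// Hypothetical A/B bucketing for the profiling question.
type Variant = "question-first" | "question-later";

// Deterministic split: the same user always lands in the same bucket.
function assignVariant(userId: string): Variant {
  let hash = 0;
  for (const ch of userId) hash += ch.charCodeAt(0);
  return hash % 2 === 0 ? "question-first" : "question-later";
}

function buildQuestionOrder(variant: Variant, baseQuestions: string[]): string[] {
  const claimsQuestion =
    "Have you had any insurance claims or accidents in the last three years?";
  return variant === "question-first"
    ? [claimsQuestion, ...baseQuestions]
    : [...baseQuestions, claimsQuestion];
}
```

<p>Once the test concludes, this is exactly the kind of code that gets deleted or collapsed into a single variant.</p>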
<p>In this scenario, you know that the code you add will be removed – or, at least, modified to retain only one variant – once the test concludes.</p>
<p>An LLM can save you a significant amount of time. <strong>If the prompts that you used to run the A/B test are available</strong> – for example, you stored them as a plan – <strong>then you can simply ask the LLM to remove one variant or both</strong>.</p>
<h2 id="heading-4-navigating-legacy-code">4. Navigating legacy code</h2>
<p>Imagine that you are a new member of the team and discover that the WhatsApp chatbot does not initiate conversations with all users who submit their contact information on the company’s website. In order to understand the conditions that trigger new conversations, you examine the codebase. However, you soon realize that the task will be challenging: the logic is scattered across multiple files and the code is difficult to follow.</p>
<p><strong>You can ask an LLM to interpret the code for you and explain the conditions in plain English</strong>. Understanding natural language is easier than understanding code, especially if the code was not developed with readability in mind.</p>
<p>Note that the questions you pose to the LLM may be difficult to answer – they may require extensive reasoning and analysis of large fragments of code – but simpler questions can be just as valuable. For example, asking the LLM to explain an individual function, which typically yields a response within seconds, can be extremely helpful when navigating legacy code.</p>
<h2 id="heading-conclusions">Conclusions</h2>
<p>This post outlines four scenarios that give software engineers unique opportunities to leverage LLMs and accelerate their development process. In these scenarios, the usual concern about LLMs – the quality of AI-generated code – is far less critical, which enables substantial productivity gains.</p>
<p>In a subsequent post, I will step away from this “happy-path for AI” and explore a more realistic and challenging case: introducing new long-lived functionality into an existing codebase. I will discuss the key challenges involved and strategies that we can use to address them.</p>
]]></content:encoded></item><item><title><![CDATA[Characterization testing: adding tests to legacy code]]></title><description><![CDATA[Some people feel uneasy when they test-drive code, so they favor the traditional workflow where testing is an after-development activity. Other people, on the contrary, believe that adding automated tests after development is more challenging, so the...]]></description><link>https://mariocervera.com/characterization-testing-adding-tests-to-legacy-code</link><guid isPermaLink="true">https://mariocervera.com/characterization-testing-adding-tests-to-legacy-code</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[Testing]]></category><category><![CDATA[General Programming]]></category><category><![CDATA[software development]]></category><category><![CDATA[refactoring]]></category><dc:creator><![CDATA[Mario Cervera]]></dc:creator><pubDate>Sun, 30 Nov 2025 15:25:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1764514829058/f073220f-2af5-4d61-a942-8aa2e2d697db.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Some people feel uneasy when they test-drive code, so they favor the traditional workflow where testing is an after-development activity. Other people, on the contrary, believe that adding automated tests after development is more challenging, so they favor a test-first approach.</p>
<p>Even though I belong to the second group, when I am dealing with legacy code I do not get to choose.</p>
<p>Adding tests to legacy code can be painful. Very painful.</p>
<p>When you write tests first, you <strong>know</strong> (or, at least, you have a fair idea about) what the new code needs to do. You have <strong>expectations</strong> for the new code, so you can capture these expectations as tests. In contrast, when you add tests to legacy code, your knowledge about the code can be a major limiting factor. Legacy code is convoluted, obscure, and hard to read, so, most often, you don’t understand the code well.</p>
<p><strong>Lack of understanding</strong> is an insidious problem. Without a well-grounded approach, adding tests to an unfamiliar system is like crossing a busy avenue blindfolded: you may succeed, but the chances of a negative outcome are high. You may produce tests that are not significantly better than no tests at all, and this will make leaving the code untested a reasonable alternative.</p>
<p>This is a serious problem. As Frederick Brooks says: “the only constancy is change itself” [1]. The <strong>need for change</strong> always arises, and you need a comprehensive suite of tests to modify code safely.</p>
<p>As a solution to this problematic situation, Michael Feathers proposes a less-conventional approach to testing that is called <strong>characterization testing</strong> [2].</p>
<p>In this post, I describe what characterization tests are, what benefits they bring, and how we can write characterization tests effectively.</p>
<h2 id="heading-what-are-characterization-tests">What are characterization tests?</h2>
<p>A conventional way to look at automated tests is as documents that describe the expected behavior of the system. The feat of “<em>tests as documentation”</em> is difficult to achieve in legacy code, however. If we don’t understand the code well, the intent of the tests is not in our head; therefore, we don’t have expectations that we can easily articulate. Writing readable and intent-revealing tests seems impossible in this context.</p>
<p>The way out of this dead end is a shift in perspective. If you, just for a moment, relegate documentary value to a secondary role, you realize that writing tests just to invoke the system and observe its output offers <strong>insight</strong> and <strong>learning</strong>.</p>
<p>This perspective is what characterization tests exploit. Characterization testing relaxes the initial readability focus so that you can observe the actual system in operation and capture your observations as automated tests. As your knowledge grows, you will improve the readability of the tests, but the focus will be on making your observations more explicit and intent-revealing.</p>
<blockquote>
<p>Characterization tests allow you to invoke the legacy code, observe what it does, and understand its behavior. When you write characterization tests, you do not document your expectations. You characterize the <strong>actual</strong> behavior of the system.</p>
</blockquote>
<h2 id="heading-benefits-of-characterization-testing">Benefits of characterization testing</h2>
<p>The main benefit of characterization testing is that you <strong>improve your understanding</strong> of the legacy code. You add one test at a time, and this allows you to learn about the system incrementally.</p>
<p>This is not the only benefit, however. While you improve understanding, you gain a suite of tests almost for free. This suite, similarly to any other form of automated testing, helps you <em>preserve</em> the behavior of the system. If you change some behavior unintentionally, the characterization tests will notice and warn you. Characterization tests give you a safety net that <strong>reduces risk when applying changes</strong>.</p>
<p>An equivalent way to look at the benefit of risk reduction is <strong>bug prevention</strong>. If you run the tests often enough, every time they catch an error, that error has been alive for only a few seconds or minutes, instead of days, weeks, or even months. Characterization tests help you detect errors early, when they are cheaper to fix.</p>
<blockquote>
<p>Characterization tests help you improve your understanding of legacy code, reduce risk when applying changes, and make errors easier to detect and fix.</p>
</blockquote>
<h2 id="heading-preserving-behavior-of-legacy-code">Preserving behavior of legacy code</h2>
<p>A common argument against characterization testing is that legacy code is often buggy. Why would you want to preserve behavior?</p>
<p>When a system has been in operation for a non-negligible amount of time, users depend on the way it works. Some behavior may look defective to you, but it is possible that users rely on that behavior.</p>
<p>Michael Feathers says:</p>
<blockquote>
<p>When a system goes into production, in a way, it becomes its own specification. We need to know when we are changing existing behavior regardless of whether we think it's right or not.</p>
</blockquote>
<p>Preserving behavior is important, but, when you write characterization tests, it is common to come across behavior whose correctness raises reasonable doubt. If you suspect that some behavior is a bug, get the opinion of other stakeholders. If it is a bug, go ahead and fix it.</p>
<h2 id="heading-how-to-write-characterization-tests">How to write characterization tests</h2>
<p>There is one thing that you will not do when you write characterization tests: look at functional specifications.</p>
<p>Functional specifications, if they exist, state what the system is supposed to do, not what it actually does. We don’t look for mismatches between the behavior that we expect and the behavior that the system exhibits. <strong>Characterization testing is not bug search; it is characterization of actual behavior</strong>. Therefore, we will look where the only truth about the system behavior lies: the code.</p>
<p>Looking at the code to write tests is not a bad thing. Characterization tests are white-box tests, and we can use this fact to our advantage. For example, we can use code coverage and mutation testing tools to help us decide what tests to write next. Assisted by these tools, we can achieve a comprehensive suite of tests more easily.</p>
<h3 id="heading-an-algorithm-for-characterization-testing">An algorithm for characterization testing</h3>
<p>Michael Feathers suggests the following algorithm to write characterization tests:</p>
<ol>
<li><p>Put a piece of code in a test harness; that is, call it from a test.</p>
</li>
<li><p>Write an assertion that you know will fail.</p>
</li>
<li><p>Let the failure tell you what the behavior is.</p>
</li>
<li><p>Change the test so that it expects the behavior that the code produces.</p>
</li>
<li><p>Repeat.</p>
</li>
</ol>
<p>Step 1 is usually the hardest. You may want to call a method from a test, but, to do so, you need to instantiate the class that contains the method. Instantiating this class can be tough if it has undesired side effects, such as accessing a database or loading an expensive resource. You may need to break dependencies first.</p>
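<p>A common dependency-breaking move for step 1 is to hide the troublesome side effect behind an interface so that the test harness can substitute it. The sketch below uses hypothetical names; it is one possible shape of this technique, not code from the book.</p>

```typescript
// Before: the class opened a real database connection in its constructor,
// so it could not be instantiated from a test.
// After: the dependency hides behind an interface.
interface CustomerStore {
  findEmail(customerId: string): string;
}

class InvoiceNotifier {
  constructor(private store: CustomerStore) {}

  recipientFor(customerId: string): string {
    return this.store.findEmail(customerId);
  }
}

// In the test harness, a trivial fake stands in for the database.
class FakeCustomerStore implements CustomerStore {
  findEmail(_customerId: string): string {
    return "test@example.com";
  }
}
```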
<h3 id="heading-an-example-of-characterization-test">An example of characterization test</h3>
<p>Steps 2 to 5 are the steps where you write characterization tests.</p>
<p>Suppose you want to add tests for a function called “<em>padString”</em>. You look at the code of this function and, apparently, it pads an input number string with 0s so that it contains a certain number of digits. It also looks like it removes certain characters. However, you are not sure because the code is hard to understand.</p>
<p>Looking at the code, you are almost certain that, if you pass “3.45” into the function, it will not return “abc”, so you write the following failing test:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764515002740/1b7fbaa3-9341-434a-bca6-678ee39b1aa2.png" alt class="image--center mx-auto" /></p>
<p>The name of the test is deliberately vague. You don’t have enough knowledge at this point to come up with a good intent-revealing name that states the behavior under test.</p>
<p>You run the test, and, as expected, it fails. You look at the assertion failure and observe that the actual output of the method is “0000000345”. Now, you know more than you did before running the test.</p>
<p>At this point, you can update the test so that it asserts (and preserves) the actual behavior of the <em>“padString”</em> method. And, if you feel comfortable enough, you can also improve the test name:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764515082612/25b0ddde-7b3c-41a8-a117-c7ca65306b30.png" alt class="image--center mx-auto" /></p>
<p>After the test update, you can continue to write more tests. You will stop when you are satisfied with your understanding of the <em>“padString”</em> method and when the tests allow you to safely apply the changes that you want to make.</p>
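<p>In code, the two steps above might look like the sketch below. The <em>padString</em> implementation is invented here so that the example is runnable; in the real scenario you would only learn the behavior from the legacy function’s observed output.</p>

```typescript
// Invented stand-in for the legacy function (an assumption, not real code):
// it strips non-digit characters and left-pads the result to 10 characters.
function padString(input: string): string {
  const digits = input.replace(/\D/g, "");
  return digits.padStart(10, "0");
}

// Step 2: an assertion you know will fail.
//   expect(padString("3.45")).toBe("abc");  // fails: actual is "0000000345"

// Step 4: the updated test asserts (and preserves) the observed behavior,
// now with an intent-revealing name.
function padStringRemovesNonDigitsAndPadsToTenCharacters(): void {
  const result = padString("3.45");
  if (result !== "0000000345") {
    throw new Error(`unexpected output: ${result}`);
  }
}
```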
<h2 id="heading-conclusion">Conclusion</h2>
<p>Characterization tests offer a different perspective on testing.</p>
<p>Unlike test-first approaches (such as Test-Driven Development), the focus is not on driving development with tests. Unlike other test-later approaches (such as manual exploratory testing), the focus is not on finding bugs. Instead, characterization testing focuses on (1) characterizing the behavior of legacy code to understand what it actually does and (2) preserving its behavior under changes.</p>
<p>Increasing understanding about code is key to minimize the chance of errors when we modify it. This chance of errors is also greatly minimized by the suite of tests that we get almost for free.</p>
<h2 id="heading-references">References</h2>
<p>[1] <em>The Mythical Man-Month: Essays on Software Engineering</em>. Brooks, F. P. Addison-Wesley (1975; 2nd ed. 1995).</p>
<p>[2] <em>Working Effectively with Legacy Code</em>. Feathers, M. Prentice Hall Professional (2004).</p>
]]></content:encoded></item><item><title><![CDATA[Common test smells]]></title><description><![CDATA[Most of us are familiar with the problems exhibited by the systems that contain design smells (understanding the term “smell” as defined in Martin Fowler’s book Refactoring: Improving the Design of Existing Code).
Some of these problems are:

The sys...]]></description><link>https://mariocervera.com/common-test-smells</link><guid isPermaLink="true">https://mariocervera.com/common-test-smells</guid><category><![CDATA[Testing]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[code smell ]]></category><category><![CDATA[Design]]></category><category><![CDATA[General Programming]]></category><dc:creator><![CDATA[Mario Cervera]]></dc:creator><pubDate>Tue, 21 Sep 2021 16:21:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1632239546859/qlckgyDZ0.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most of us are familiar with the problems exhibited by the systems that contain design smells (understanding the term “smell” as defined in Martin Fowler’s book <em>Refactoring: Improving the Design of Existing Code</em>).</p>
<p>Some of these problems are:</p>
<ul>
<li><p>The system is hard to understand.</p>
</li>
<li><p>The system is hard to change because every simple change forces you to modify many other parts of the system.</p>
</li>
<li><p>The system is immobile because its parts are so tightly coupled that they cannot be reused independently.</p>
</li>
</ul>
<p>Many books offer techniques to deal with these problems. A notable example is Martin Fowler’s refactoring book, which describes ways to improve the structure of code without altering its observable behavior. Other examples are <em>Code Complete</em> by Steve McConnell and <em>Working Effectively with Legacy Code</em> by Michael C. Feathers.</p>
<p>Reading books about good Software Engineering practices is a great way to grow in your career. However, I have observed that some developers tend to apply the acquired knowledge to production code only: they do not treat test code with the same care.</p>
<p>This is unfortunate. Test code should be kept to the same quality standards as production code. Otherwise, the quality of the production code will decline over time because you cannot refactor safely (without good tests).</p>
<p>This is why smells in test code are as bad as smells in production code; and they can even be worse.</p>
<p>If the production code has design smells but the tests don’t, the tests give you a safety net that allows you to fix the smells through refactoring. But, when the problem is in the tests, how do you fix it?</p>
<p>This post is not about techniques to improve test code. Rather, it describes five common test smells that can help you identify when it may be necessary to take corrective actions.</p>
<h3 id="1-fragile-tests">1. Fragile tests</h3>
<p>When some behavior changes in the system, it is expected that the tests that assert the old behavior fail. After all, the system does not exhibit the old behavior anymore.</p>
<p>This is the very reason why we write tests in the first place. We may change the behavior of the system unintentionally, and, when this happens, we want the tests to fail and warn us. This is how tests prevent bugs.</p>
<p>But, if we modify code without changing behavior and tests fail anyway, then the tests are failing for no valid reason.</p>
<p>These tests are fragile.</p>
<p>A fragile test is a test that breaks easily. It is a test that fails when it should not fail. It is a test that imposes a heavy burden because we are forced to revisit it often.</p>
<p>When we are forced to revisit tests more often than we should, we cannot change and improve code comfortably.</p>
<blockquote>
<p><strong>A good suite of tests makes refactoring easier. When the tests are fragile, the effect is the opposite.</strong></p>
</blockquote>
<h3 id="2-slow-tests">2. Slow tests</h3>
<p>Tests may be slow.</p>
<p>A common cause is access to external sources, such as the file system, a database, or a distributed service.</p>
<p>The most obvious consequence of slow tests is that productivity decreases. We waste precious seconds every time we run the tests.</p>
<p>But there are other, less obvious, consequences: our ability to prevent bugs decreases and bugs become more expensive to fix.</p>
<p>Our ability to prevent bugs decreases because the key to bug prevention is getting immediate feedback about code changes. If we don’t get this feedback, we can introduce bugs and discover them the next day, or the next week – if we discover them at all. We get immediate feedback by running the tests with every change to the system; however, this is not practical if the tests are slow.</p>
<p>A nasty side effect is that bugs suddenly become more expensive to fix because they are discovered late, when the context is not fresh in our minds.</p>
<blockquote>
<p><strong>When tests take too long to run, we can't run them often enough. Productivity decreases and we lose our ability to deal with bugs in a cost-effective way.</strong></p>
</blockquote>
<h3 id="3-obscure-tests">3. Obscure tests</h3>
<p>Automated tests have some <a target="_blank" href="https://mariocervera.com/non-obvious-benefits-automated-testing">non-obvious benefits</a>.</p>
<p>One of them is that automated tests give us <strong>examples</strong> about how the system is used at the code level. Therefore, they help us understand the system better.</p>
<p>Another benefit is that tests offer <strong>defect localization</strong>. Tests must fail when we introduce defects; that is, when the behavior of the system changes in ways that it should not change. Ideally, the tests will help us locate the problem easily.</p>
<p>If we want the tests to be useful code examples and to help us locate defects effectively, the tests must be readable, not obscure. If the tests are obscure, their benefits are minimized because it is not easy to tell what the tests are testing. We can only reap the benefits of testing when the cause-effect relationship between the inputs and the outcomes of the tests is crystal-clear and easy to identify at first sight.</p>
<blockquote>
<p><strong>Tests must be intent-revealing; otherwise, they lose most of their value. The system becomes harder to understand and defects harder to diagnose.</strong></p>
</blockquote>
<h3 id="4-tests-with-conditional-logic">4. Tests with conditional logic</h3>
<p>When you introduce conditional statements and loops in a test, the complexity of the test increases. After a certain (very low) threshold, you cannot be sure that the test is bug-free and it works as expected.</p>
<p>You need automated tests for the tests.</p>
<p>But then you need tests for the tests of the tests. When does this recursion stop?</p>
<p>The “trick” is to write tests that are so simple that they can easily be seen to be correct, thereby not requiring testing. This happens, for example, when tests contain only a few sequential statements.</p>
<blockquote>
<p><strong>When a test contains branches or loops, it is more complex than it should. Complex tests can hide subtle bugs.</strong></p>
</blockquote>
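<p>To make the contrast concrete, here is a hypothetical example (the <em>discountFor</em> function and both tests are invented for illustration):</p>

```typescript
function discountFor(itemCount: number): number {
  return itemCount >= 10 ? 0.1 : 0;
}

// Smell: the test contains a loop and a branch that duplicate the
// production logic, so the test itself could hide a bug.
function smellyTest(): void {
  for (const count of [5, 10]) {
    const expected = count >= 10 ? 0.1 : 0; // duplicated logic
    if (discountFor(count) !== expected) throw new Error("failed");
  }
}

// Better: a few sequential statements, easily seen to be correct.
function straightLineTest(): void {
  if (discountFor(5) !== 0) throw new Error("no discount under 10 items");
  if (discountFor(10) !== 0.1) throw new Error("10% discount at 10 items");
}
```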
<h3 id="5-assertion-roulette">5. Assertion roulette</h3>
<p>Tests must verify a single condition.</p>
<p>Another way of expressing the same concept is that tests must assert a single expected behavior.</p>
<p>This is easy to say. However, it is hard to pin down what exactly a single condition (or a single behavior) is. These notions are subjective.</p>
<p>What works best for me is thinking about tests in terms of their three well-known phases: arrange, act, and assert.</p>
<p>Testing a single condition does not imply that there is only one “physical” assert statement. It implies that there is a single act phase and a single assert phase within the same test. We avoid a series of act-assert pairs.</p>
<p>The problem with alternating act and assert phases in the same test is that, when the test fails, it is hard to determine which behavior is broken because the test verifies several of them at once. When this happens, we say that we are experiencing <em>assertion roulette</em>.</p>
<blockquote>
<p><strong>Tests should have only one unambiguous reason to fail.</strong></p>
</blockquote>
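<p>As a hypothetical illustration (the <em>Cart</em> class and the test names are invented), here is an assertion-roulette test split into two single-behavior tests:</p>

```typescript
class Cart {
  private items: string[] = [];
  add(item: string): void { this.items.push(item); }
  remove(item: string): void { this.items = this.items.filter(i => i !== item); }
  count(): number { return this.items.length; }
}

// Smell: alternating act-assert pairs; if it fails, which behavior broke?
function rouletteTest(): void {
  const cart = new Cart();
  cart.add("book");                                          // act
  if (cart.count() !== 1) throw new Error("add failed");     // assert
  cart.remove("book");                                       // act again
  if (cart.count() !== 0) throw new Error("remove failed");  // assert again
}

// Better: one behavior per test – a single act phase, a single assert phase.
function addingAnItemIncreasesTheCount(): void {
  const cart = new Cart();                            // arrange
  cart.add("book");                                   // act
  if (cart.count() !== 1) throw new Error("failed");  // assert
}

function removingAnItemDecreasesTheCount(): void {
  const cart = new Cart();                            // arrange
  cart.add("book");
  cart.remove("book");                                // act
  if (cart.count() !== 0) throw new Error("failed");  // assert
}
```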
<h2 id="conclusion">Conclusion</h2>
<p>If you think there are test smells that do not appear in this post, you are right. The nature of the problems that can arise during testing is too diverse to fit in a single post.</p>
<p>For example, another type of test smell is erratic tests: tests that behave in an apparently non-deterministic way. Sometimes they pass and sometimes they fail, and it is not clear why this happens or how to obtain predictable results.</p>
<p>Despite the necessary incompleteness of this post, I hope that it gives you an idea of what bad tests look like and the problems they can bring. Hopefully, this will motivate you and, next time you come across a bad test, you will feel the urge to act and fix it.</p>
]]></content:encoded></item><item><title><![CDATA[Code Structure vs Behavior in TDD]]></title><description><![CDATA[Recently, I wrote a post about my talk at the 1st International Conference on Test-Driven Development (TDD).
The post covers the part where I discuss about the notion of robust tests and how unit size impacts robustness.
In that post, I leave one que...]]></description><link>https://mariocervera.com/code-structure-vs-behavior-in-tdd</link><guid isPermaLink="true">https://mariocervera.com/code-structure-vs-behavior-in-tdd</guid><category><![CDATA[General Programming]]></category><category><![CDATA[TDD (Test-driven development)]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Testing]]></category><category><![CDATA[Design]]></category><dc:creator><![CDATA[Mario Cervera]]></dc:creator><pubDate>Thu, 12 Aug 2021 16:11:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1628713408929/QA8Rpe_tZ.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently, I wrote a <a target="_blank" href="https://mariocervera.com/talk-1st-international-conference-tdd">post</a> about my talk at the 1st International Conference on Test-Driven Development (TDD).</p>
<p>The post covers the part where I discuss the notion of robust tests and how unit size impacts robustness.</p>
<p>In that post, I leave one question unanswered: how can we decide the right size of a unit?</p>
<p>This question is important. After all, there seems to be a correlation between the size of the units under test and test robustness.</p>
<p>Here, I address the question from two different perspectives — code structure and behavior — and I show that each perspective aligns with a different style of TDD — outside-in and classic.</p>
<p>As we will see, both TDD styles help us write tests that are robust.</p>
<h2 id="outside-in-tdd-is-about-code-structure">Outside-in TDD is about code structure</h2>
<p>In outside-in TDD, you write system-level scenarios (i.e., customer tests), and then you make them pass by proceeding towards inner layers of the software, guided by lower-level tests. At each step, you identify responsibilities and distribute them in different modules (the systems under test, or SUTs). This approach requires <em>test doubles</em> (a.k.a. mocks) to stand in for the modules that are not developed yet.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1628711977209/Ytt7W-L_W.png" alt="OutsideInTDD.png" /></p>
<p>Test doubles can be seen as a design tool. When we are working on a module, we identify its needs; then, we decide whether the current module implements these needs or delegates them to depended-on modules. In the latter case, test doubles help us design the required interfaces.</p>
<p>Observe the relevance of <strong>code structure</strong>. We analyze responsibilities, and then we decide which modules to create, how they interact, and where to place test doubles. The resulting modules and test doubles impact the size of our units and the tests that we will subsequently write.</p>
<h4 id="deciding-unit-size">Deciding unit size</h4>
<p>In this <a target="_blank" href="https://www.youtube.com/watch?v=KyFVA4Spcgg">talk</a>, Sandro Mancuso explains how we can reason about where to place test doubles.</p>
<p>Consider a class <em>A</em> that uses classes <em>B</em> and <em>C</em>. If you remember UML, you know that there are two types of associations: composition and aggregation.</p>
<p>In composition, <em>B</em> and <em>C</em> can be considered as part of <em>A</em>. If we incorporated <em>B</em> and <em>C</em> into <em>A</em>, <em>A</em> would still be cohesive. In this case, it makes sense to consider the three classes as a unit.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1628712569938/meVuj6Zldv.png" alt="Composition.png" /></p>
<p>For example, let’s suppose <em>A</em> implements a tax-calculation algorithm. The algorithm checks marital status, sources of income, etc. These checks can go in different classes (<em>B</em> and <em>C</em>) but, if they didn’t, <em>A</em> would still be cohesive because all the logic has the same responsibility.</p>
<p>When we have aggregation, <em>B</em> and <em>C</em> are not part of <em>A</em>. If we incorporated <em>B</em> and <em>C</em> into <em>A</em>, this class would not be cohesive. In this case, three units may make more sense.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1628712518532/klIUrYlrA.png" alt="Aggregation.png" /></p>
<p>For example, let’s suppose <em>A</em> implements the check-out process of an e-commerce application. This process involves several steps: payment, user notification, etc. Each step is complex, so it can go in a separate class (e.g., <em>B</em> can deal with payment and <em>C</em> can be an email service). If we incorporated <em>B</em> and <em>C</em> into <em>A</em>, this class would deal with unrelated responsibilities. Therefore, it makes sense to test the check-out process in isolation and mock the other classes.</p>
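<p>The aggregation case can be sketched as follows. The interfaces and class names are hypothetical; the point is that payment and notification are separate responsibilities, so test doubles stand in for them while the check-out process is tested in isolation:</p>

```typescript
// Hypothetical collaborators (B and C in the discussion above).
interface PaymentGateway {
  charge(amount: number): boolean;
}
interface EmailService {
  send(to: string, message: string): void;
}

// A: the check-out process, tested in isolation from B and C.
class CheckoutProcess {
  constructor(
    private payments: PaymentGateway,
    private emails: EmailService
  ) {}

  checkout(userEmail: string, total: number): boolean {
    if (!this.payments.charge(total)) return false;
    this.emails.send(userEmail, "Your order is confirmed");
    return true;
  }
}

// Hand-rolled test doubles used in place of the real B and C.
class StubGateway implements PaymentGateway {
  charge(_amount: number): boolean { return true; }
}
class SpyEmailService implements EmailService {
  sent: string[] = [];
  send(to: string, _message: string): void { this.sent.push(to); }
}
```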
<h4 id="common-pitfalls">Common pitfalls</h4>
<p>We must watch out for common pitfalls in outside-in TDD.</p>
<ul>
<li><p><strong>Small units</strong>: overuse of test doubles leads to small units, higher coupling, and tests that are less robust.</p>
</li>
<li><p><strong>Test class per production class</strong>: in extreme cases, the structure of the tests will mirror the structure of the code. This negatively affects robustness: every time we restructure the code (for example, by merging or removing classes), tests will fail.</p>
</li>
<li><p><strong>Breaking encapsulation</strong>: when we focus on code structure, it is tempting to test every method in every class, even if they are private. Private logic changes often, so tests will break often.</p>
</li>
</ul>
<h4 id="how-to-increase-robustness">How to increase robustness?</h4>
<p>Backtracking. Once the modules in the inner layers are available, we can remove test doubles to increase the size of the units. We can also delete unnecessary tests.</p>
<p>But remember that small units also have benefits. You will not always remove test doubles (or tests), but this possibility is a good resource to have in our mental toolbox.</p>
<h2 id="classic-tdd-is-about-behavior">Classic TDD is about behavior</h2>
<p>Classic TDD, as described by Kent Beck in the “Test-Driven Development by Example” book, focuses on <strong>behavior</strong>, and code structure plays a less prominent role.</p>
<p>This behavior-centric perspective was not obvious to me when I read the book. I became fully aware when I watched this brilliant <a target="_blank" href="https://www.youtube.com/watch?v=EZ05e7EMOLM">talk</a> by Ian Cooper, where he makes the following observation:</p>
<blockquote>
<p>Adding a new class is not the trigger for writing tests. The trigger is implementing a requirement.</p>
</blockquote>
<p>Creating a new structure of code, such as a function or a class, is not sufficient reason to write new tests. You write new tests when the system must exhibit new behavior.</p>
<p>This is how behavior drives development. You specify one behavior at a time, as automated tests; and, every time you write a new test, you make it pass, making sure that the test runs in isolation from other tests — when tests are independent, they are more robust.</p>
<p>And you don’t use test doubles, except, for example, to replace external entities that introduce nondeterminism or negatively affect speed. You don’t isolate units of code. The unit of isolation is the test, not the system under test.</p>
<p>This is why the popular notion of unit test does not match classic TDD.</p>
<p>This is explicit in Kent Beck’s book. He only uses the term “unit test” once: to say that tests in TDD are not unit tests:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1628713005409/NAWsCpgaS.png" alt="SmallScaleTests.png" /></p>
<p>Kent Beck uses the term “small-scale tests”. Other terms are “developer tests” (by Ian Cooper) and “micro tests” (by GeePaw Hill). The common factor is the focus on behavior, not code structure.</p>
<p>To give us a feel for what behavior looks like, Kent Beck offers some examples in his book.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1628713046763/cRnKf2h3Z.png" alt="Behaviors.png" /></p>
<p>When he discusses tests, he states behavior: “we need to be able to add amounts in two different currencies”. He does not say: “we need to call an <em>add</em> method in a <em>MoneyCalculator</em> class". No one gives you such a requirement.</p>
<p>So, when you formulate tests (e.g., using the <em>Given-When-Then</em> notation), make sure they state behavior:</p>
<ul>
<li><strong>GIVEN</strong>: two positive integer numbers.</li>
<li><strong>WHEN</strong>: we multiply the numbers.</li>
<li><strong>THEN</strong>: we obtain the correct product as a result.</li>
</ul>
<p>Not code structure and implementation details:</p>
<ul>
<li><strong>GIVEN</strong>: two 32-bit positive integer numbers.</li>
<li><strong>WHEN</strong>: we invoke the <em>multiply</em> method on the <em>Calculator</em> class.</li>
<li><strong>THEN</strong>: the method returns the correct result as a 32-bit integer number.</li>
</ul>
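<p>As an illustrative sketch, the behavior-stated version above could map to a test like the following (the <em>multiply</em> function and the concrete numbers are assumptions made for this example, not taken from the book):</p>

```python
def multiply(a: int, b: int) -> int:
    """Production code under test (an assumed, minimal example)."""
    return a * b

def test_multiplying_two_positive_integers_yields_their_product():
    # GIVEN: two positive integer numbers
    a, b = 6, 7
    # WHEN: we multiply the numbers
    result = multiply(a, b)
    # THEN: we obtain the correct product
    assert result == 42
```

<p>Note that the test says nothing about 32-bit integers or about a <em>Calculator</em> class; it only states the behavior.</p>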
<h4 id="advantages-of-behavior-oriented-tests">Advantages of behavior-oriented tests</h4>
<p>The focus on behavior of classic TDD has several advantages:</p>
<ul>
<li><p><strong>Tests are more robust</strong>: code structure is a volatile implementation detail; behavior sits at a higher, more stable level of abstraction.</p>
</li>
<li><p><strong>Tests are more intent-revealing</strong>: when tests verify small units of code, it may be hard to see the big picture. Tests that specify high-level behavior allow you to understand requirements better.</p>
</li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>Classic TDD focuses on behavior, while code structure plays a more central role in outside-in TDD.</p>
<p>In outside-in TDD, you make some design decisions upfront, and these decisions affect the resulting code structure, the size of the units, and the tests that you write. Test robustness depends strongly on unit size. By contrast, in classic TDD, upfront design is minimized; design emerges as you make tests pass and refactor. The units of isolation are the tests, so test robustness depends on this isolation and on the focus on high-level behavior.</p>
<p>Outside-in TDD is not all about code structure; you also specify behavior: the behavior of isolated units of code. Classic TDD is not all about behavior; you need code structure to access the behavior in the system. But tests are unaware of internal implementation details; that is, tests have no knowledge of how the behavior is partitioned.</p>
<p>For more details on the topic of this blog post, you can watch my talk at the TDD conference here:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=APFbb5MwLEU&amp;list=PLJ3Q-TNrdsXi-och0A0PaXKojDlxv4YsB">https://www.youtube.com/watch?v=APFbb5MwLEU&amp;list=PLJ3Q-TNrdsXi-och0A0PaXKojDlxv4YsB</a></div>
]]></content:encoded></item><item><title><![CDATA[My Talk at the 1st International Conference on TDD]]></title><description><![CDATA[Last July 10th, we could witness the first International Conference on Test-Driven Development (TDD).
It was a historic event. The lineup included big names such as GeePaw Hill and one of the original signatories of the Agile Manifesto: James Greenin...]]></description><link>https://mariocervera.com/talk-1st-international-conference-tdd</link><guid isPermaLink="true">https://mariocervera.com/talk-1st-international-conference-tdd</guid><category><![CDATA[General Programming]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[TDD (Test-driven development)]]></category><category><![CDATA[Testing]]></category><category><![CDATA[Design]]></category><dc:creator><![CDATA[Mario Cervera]]></dc:creator><pubDate>Wed, 28 Jul 2021 10:31:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1627425676168/7cRzaNlqr.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last July 10th, we could witness the first International Conference on Test-Driven Development (TDD).</p>
<p>It was a historic event. The lineup included big names such as GeePaw Hill and one of the original signatories of the Agile Manifesto: James Grenning. Kent Beck himself - the (re)inventor of TDD - announced the conference on his Twitter account, which, along with the effort of the organizers, helped reach a remarkable 2,000 registrations.</p>
<p>The conference covered a wide range of topics, including a talk about TDD in embedded systems by my friend Francisco Climent, and also a memorable live demo of Test &amp;&amp; Commit || Revert (TCR) by organizer Alex Bunardzic.</p>
<p>The event was broadcast live on YouTube. You can find the full recording here:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=-_noEVCR__I">https://www.youtube.com/watch?v=-_noEVCR__I</a></div>
<p>I had the honor of speaking at this conference.</p>
<p>The title of my talk was:</p>
<p><strong>"On the relationship between units of isolation and test coupling - How to write robust tests with TDD"</strong>.</p>
<p>I admit that this title reads like an academic paper. This is not necessarily bad, but, if you are not used to this type of writing, the title may not immediately convey the message that I want to express.</p>
<p>Hopefully, this post will clear things up for you. Here, I explain the key concepts of my talk: <strong>robust tests</strong>, <strong>test coupling</strong>, and <strong>units of isolation</strong>. In subsequent posts, I will build on this foundation to explain how understanding these concepts will help us write robust tests using TDD. </p>
<p>You can watch my talk here:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=APFbb5MwLEU&amp;list=PLJ3Q-TNrdsXi-och0A0PaXKojDlxv4YsB&amp;index=5">https://www.youtube.com/watch?v=APFbb5MwLEU&amp;list=PLJ3Q-TNrdsXi-och0A0PaXKojDlxv4YsB&amp;index=5</a></div>
<h1 id="robust-tests">Robust tests</h1>
<p>It is expected that tests fail when the behavior they verify changes.</p>
<p>This is how tests prevent bugs. We may change the behavior of the system unintentionally, and, when this happens, we want the tests to fail and warn us. Bug prevention is one of the main benefits of automated testing.</p>
<p>But, if behavior does not change and tests fail anyway, then we have tests that fail for no valid reason.</p>
<p>These tests are fragile.</p>
<p>A fragile test is a test that breaks easily. It is a test that fails when it should not fail. It is a test that imposes a heavy burden because we are forced to revisit it often.</p>
<p>Tests should not be fragile; they should be robust.</p>
<p>Robust tests only fail when they should. We can change and improve the structure of the code without altering observable behavior and the tests remain green. This is how tests become a valuable aid, not a burden.</p>
<blockquote>
<p>Automated tests should aid refactoring, not impede it.</p>
</blockquote>
<h1 id="test-coupling">Test coupling</h1>
<p>The main reason why tests become fragile is <em>coupling</em>.</p>
<p>Coupling kills software. This is true for both software applications and test code. Software modules must be loosely coupled to each other. Automated tests must be loosely coupled to the system under test.</p>
<p>To illustrate test coupling, let’s suppose we have a class <em>A</em> that uses two classes <em>B</em> and <em>C</em>. We could reasonably write the following test:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1627464277397/scOsTb5GG.png" alt="TestExample.png" /></p>
<p>Here, for simplicity, I am loosely speaking about classes <em>B</em> and <em>C</em> as if they were functions, but I think you get the idea.</p>
<p>The design of this test has an important implication: the test knows all the classes (<em>A</em>, <em>B</em> and <em>C</em>). This may be too much knowledge for a test. If we want to edit or remove one of the classes, or add new classes, the test will probably need to be updated. This is <strong>high coupling</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1627424739274/2WUHCQGzK.png" alt="HighCoupling.png" /></p>
<p>This test smell is sometimes called <em>overspecified software</em>. A test is a specification of behavior, and this test specifies a lot. It specifies <em>how</em> class <em>A</em> must work internally, down to the exact algorithm, rather than <em>what</em> class <em>A</em> should achieve or the results it should obtain. The test is coupled to the implementation details of <em>A</em>.</p>
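<p>A minimal, hypothetical sketch of this smell in code (the classes and methods are invented for illustration; Python's <em>unittest.mock</em> stands in for whatever mocking library the original example used):</p>

```python
from unittest.mock import Mock

class A:
    def __init__(self, b, c):
        self.b = b
        self.c = c

    def compute(self, x):
        # A delegates to its collaborators B and C
        return self.c.transform(self.b.fetch(x))

def test_compute_overspecified():
    # Arrange: the test scripts the exact interactions with B and C
    b, c = Mock(), Mock()
    b.fetch.return_value = 10
    c.transform.return_value = 20
    # Act
    result = A(b, c).compute(5)
    # Assert: the test pins down *how* A works internally
    b.fetch.assert_called_once_with(5)
    c.transform.assert_called_once_with(10)
    assert result == 20
```

<p>Renaming <em>fetch</em>, reordering the calls, or inlining <em>B</em> would break this test even if the observable result stayed the same.</p>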
<p>But this type of testing is not the only way to test software. Remember the notion of <em>encapsulation</em>. We could write a test that has a single dependency on <em>A</em> and is unaware of classes <em>B</em> and <em>C</em>. For example, a test can invoke a method in <em>A</em> and assert that the method returns a specific value.</p>
<p>In this case, we can modify <em>B</em> or <em>C</em>, remove any of them or add new classes, and the tests will not need to be updated. This is how tests become robust. This is <strong>low coupling</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1627424887920/uOplmhOWEY.png" alt="LowCoupling.png" /></p>
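<p>Sticking with invented names, a low-coupling sketch might look like this: the test depends on <em>A</em> alone and asserts on the result, so <em>B</em> and <em>C</em> can change freely:</p>

```python
class B:
    def fetch(self, x):
        return x * 2

class C:
    def transform(self, y):
        return y + 1

class A:
    def __init__(self):
        # B and C are internal details; the tests never see them
        self._b, self._c = B(), C()

    def compute(self, x):
        return self._c.transform(self._b.fetch(x))

def test_compute_behavior_only():
    # Single dependency on A: assert on the observable result
    assert A().compute(5) == 11
```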
<h1 id="units-of-isolation">Units of isolation</h1>
<p>One of the implications of the dependencies that the last picture shows is that classes <em>A</em>, <em>B</em> and <em>C</em> form a <em>unit of isolation</em>: when the tests run, code in <em>A</em>, <em>B</em> and <em>C</em> (and only in these classes) will be executed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1627425398600/rzj8bZlUJ.png" alt="Unit1.png" /></p>
<p>Here, class <em>A</em> acts as an interface or facade for the tests, and classes <em>B</em> and <em>C</em> are implementation details that are internal to the unit. The tests are unaware of these implementation details, and, therefore, the coupling is low.</p>
<p>This contrasts with the first example, where <em>B</em> and <em>C</em> had to be mocks so that the expectations of the test could be set in the arrange step. In this case, only code from <em>A</em> is executed when the tests run, and, therefore, <em>A</em> is the unit of isolation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1627425408222/ebO9zXQ0n.png" alt="Unit2.png" /></p>
<p>Evidently, the notion of unit is important and closely related to test coupling. In the high-coupling example, the unit is smaller; in the low-coupling example, the unit is bigger.</p>
<p>This reasoning suggests that the bigger the unit, the better, but the trade-off is not that simple.</p>
<p>When a test fails and the unit under test is small (e.g., a single method or a single class), it is more likely that we can identify the cause of the problem easily. However, smaller units mean higher coupling, which increases fragility and, consequently, unnecessary rework.</p>
<p>It’s a trade-off between defect localization and maintenance costs.</p>
<h1 id="conclusion">Conclusion</h1>
<p>Unit size impacts test coupling, and, as a consequence, the fragility of the tests. Furthermore, we saw that both small and big units have advantages and disadvantages.</p>
<p>This naturally leads to the question: how can we decide the right size of a unit?</p>
<p>In subsequent posts, I will explore how we can address this question. And I will also show that, if we follow classic TDD, we can obtain robust tests by looking at units of isolation from a slightly different perspective.</p>
]]></content:encoded></item><item><title><![CDATA[Refactoring long lists of parameters]]></title><description><![CDATA[There seems to be no agreement in the software engineering community about how many parameters are too many for a function.
This should come to no surprise.
The ideal number of parameters for a function need not be the same as the ideal number of par...]]></description><link>https://mariocervera.com/refactoring-long-lists-of-parameters</link><guid isPermaLink="true">https://mariocervera.com/refactoring-long-lists-of-parameters</guid><category><![CDATA[General Programming]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[clean code]]></category><category><![CDATA[refactoring]]></category><category><![CDATA[coding]]></category><dc:creator><![CDATA[Mario Cervera]]></dc:creator><pubDate>Wed, 10 Feb 2021 17:35:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1612913148246/GIoxQwZ2d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There seems to be no agreement in the software engineering community about how many parameters are too many for a function.</p>
<p>This should come as no surprise.</p>
<p>The ideal number of parameters for a function need not be the same as the ideal number of parameters for another function. Furthermore, even if we focus only on a single function, the perceptions of different software engineers (who may have different backgrounds) will vary.</p>
<p>The issue is very subjective.</p>
<p>Proof of this subjectivity is that, if we look at several well-known resources, we will find different opinions.</p>
<p>The <em>Code Complete</em> book advises us to limit the number of parameters to about seven. The reason is that seven seems to be a magic number for people’s comprehension: people generally cannot keep track of more than about seven chunks of information at once.</p>
<p>Not surprisingly, the <em>Clean Code</em> book adopts a more aggressive stance, asserting that three arguments should be avoided whenever possible and more than three requires very special justification.</p>
<p>This rough upper bound of three parameters is closer to how I feel. Around four parameters, I start to feel uncomfortable, and my feelings get increasingly worse when I reach five parameters or more.</p>
<p>Five parameters or more may seem uncommon to you, but plenty of codebases are littered with functions that have six, seven, and even more. This is unfortunate because long lists of parameters severely degrade software quality. </p>
<p>Raising awareness of the problems of long lists of parameters, and showing how we can shrink these lists, is the main motivation of this post.</p>
<p>Hopefully, next time you come across a long list of parameters, you will feel motivated to refactor the code towards smaller, more cohesive, and higher-quality functions.</p>
<h2 id="1-problems-of-long-lists-of-parameters">1. Problems of long lists of parameters</h2>
<p>Long lists of parameters negatively affect three desirable properties of software: readability, maintainability, and testability.</p>
<h3 id="readability">Readability</h3>
<p>When a function has many parameters, reading its signature or a call to the function requires a lot of mental effort. And, as the number of parameters grows, this effort increases rapidly.</p>
<p>Any time you need to switch to full-concentration mode to understand individual statements, reading code becomes a pain. Statements should be so straightforward that they glide past your eyes. Only when statements are this simple does reading code become a pleasure.</p>
<p>Which of the following statements is easier to understand?</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1612909984640/KO-glwLle.png" alt="CodeSnippet_1.png" /></p>
<h3 id="maintainability">Maintainability</h3>
<p>An important problem with function parameters is that, whenever they change, the callers of the function must be updated. This will happen more often as the number of parameters grows larger.</p>
<p>Also, the more parameters a function has, the more information the callers need to know to invoke it. This increases the coupling between the function and its callers.</p>
<p>The higher the number of parameters and the tighter the coupling, the bigger the pain of maintaining the function.</p>
<h3 id="testability">Testability</h3>
<p>To prove that a function works, you would have to write tests for every conceivable input.</p>
<p>For any non-trivial function, this is impossible. You must choose tests wisely.</p>
<p>Choosing tests wisely is easy when a function has 0 or 1 parameters, but it becomes increasingly harder with every new parameter because more combinations come into play.</p>
<p>Functions that have many parameters are extremely hard to test.</p>
<h2 id="2-refactoring-towards-better-design">2. Refactoring towards better design</h2>
<p>To reduce the number of parameters of a function, you have several refactoring alternatives:</p>
<h3 id="create-a-new-data-structure">Create a new data structure</h3>
<p>The parameters of a function often share some kind of logical cohesion. If this is the case, and you find that the set of parameters is consistently used together, it may be wise to group them in a new data structure or class.</p>
<p>This has the additional benefit of making the relationship between the parameters more explicit and the code more intent-revealing.</p>
<p>You can also prevent primitive obsession.</p>
<p>Compare:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1612912366424/uwRxWpO4l.png" alt="CodeSnippet_2.png" /></p>
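<p>A minimal sketch of this refactoring (the <em>Address</em> grouping is an invented example, not the one shown in the image):</p>

```python
from dataclasses import dataclass

# Before: a long, primitive-obsessed parameter list
def create_user(name, street, city, zip_code, country):
    return {"name": name, "address": (street, city, zip_code, country)}

# After: the cohesive parameters become an explicit abstraction
@dataclass
class Address:
    street: str
    city: str
    zip_code: str
    country: str

def create_user_from_address(name: str, address: Address):
    return {"name": name, "address": address}
```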
<h3 id="join-several-functions-into-a-class">Join several functions into a class</h3>
<p>If you are consistently passing the same parameters to different functions, you can group the functions into a class. The parameters will become data members of this class, and, therefore, the functions will be able to access them directly (without requiring input parameters).</p>
<p>In a similar way to the previous solution (creating a new data structure), this solution applies when the new class represents a useful abstraction that deserves its own name.</p>
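<p>A small sketch of this idea, with invented names; the shared parameters become data members, and the functions become methods that no longer need them as inputs:</p>

```python
# Before: the same parameters travel together through free functions
def net_price(price, tax_rate):
    return price * (1 + tax_rate)

def discounted_price(price, tax_rate, discount):
    return net_price(price, tax_rate) * (1 - discount)

# After: the shared parameters become state of a cohesive class
class PriceCalculator:
    def __init__(self, price, tax_rate):
        self.price = price
        self.tax_rate = tax_rate

    def net_price(self):
        return self.price * (1 + self.tax_rate)

    def discounted_price(self, discount):
        return self.net_price() * (1 - discount)
```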
<h3 id="avoid-what-to-do-parameters">Avoid "what-to-do" parameters</h3>
<p>A boolean parameter often indicates that a function does two different things: one for <code>true</code> and another one for <code>false</code>.</p>
<p>This can easily be generalized to other parameter types, such as integers and strings.</p>
<p>Regardless of their type, we must avoid "what-to-do" parameters that are passed into a function only to select from outside the internal behavior of the function.</p>
<p>This breaks encapsulation.</p>
<p>When a function has several behaviors, the best thing to do is to split the function into smaller functions that do one thing and have fewer parameters.</p>
<p>Compare:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1612915052626/lIu93J63T.png" alt="CodeSnippet_3.png" /></p>
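<p>A minimal, invented sketch of this refactoring; the boolean flag disappears and each resulting function does one thing:</p>

```python
# Before: a "what-to-do" flag selects the behavior from the outside
def format_name(first, last, formal):
    if formal:
        return f"{last}, {first}"
    return first

# After: two functions that each do one thing, with fewer parameters
def formal_name(first, last):
    return f"{last}, {first}"

def casual_name(first):
    return first
```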
<h3 id="avoid-output-parameters">Avoid output parameters</h3>
<p>The natural interpretation of parameters is as inputs to the function. This is why input parameters are easier to understand than output parameters.</p>
<p>Whenever you are reading code and you wonder whether a parameter is an output parameter, you are breaking your reading flow.</p>
<p>This should be avoided.</p>
<p>If possible, reduce the number of parameters of a function by keeping only the inputs. If the function needs to change state, it can change the state of the object that owns it.</p>
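<p>A short, invented sketch of the difference; the second version keeps only inputs and returns its result instead of writing through a parameter:</p>

```python
# Before: 'results' is an output parameter that the function mutates
def collect_even(numbers, results):
    for n in numbers:
        if n % 2 == 0:
            results.append(n)

# After: inputs only; the result is returned, not written into a parameter
def even_numbers(numbers):
    return [n for n in numbers if n % 2 == 0]
```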
<h2 id="to-end-a-short-disclaimer">To end, a short disclaimer …</h2>
<p>In our lives in general, and in software development in particular, almost everything depends on context. Long parameter lists are no exception. They are more accepted in some domains than they are in others. </p>
<p>Therefore, it is always good to keep in mind that, every time we invest a non-trivial amount of time refactoring code, we should make sure that we are solving a real problem.</p>
<p>As far as my experience is concerned, I have always found that short lists of parameters make my life easier. This is why I believe that applying the refactoring techniques from this article is often a good investment.</p>
]]></content:encoded></item><item><title><![CDATA[Are design patterns still relevant?]]></title><description><![CDATA[Design patterns became popular during the 90s, when the “Gang of Four (GoF)” (Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides) wrote the well-known book:

"Design Patterns: Elements of Reusable Object-Oriented Software"

This book is one...]]></description><link>https://mariocervera.com/are-design-patterns-still-relevant</link><guid isPermaLink="true">https://mariocervera.com/are-design-patterns-still-relevant</guid><category><![CDATA[design patterns]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[General Programming]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[Design]]></category><dc:creator><![CDATA[Mario Cervera]]></dc:creator><pubDate>Tue, 29 Dec 2020 17:01:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1609249520393/matKz-9EP.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Design patterns became popular during the 90s, when the “Gang of Four (GoF)” (Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides) wrote the well-known book:</p>
<blockquote>
<p>"Design Patterns: Elements of Reusable Object-Oriented Software"</p>
</blockquote>
<p>This book is one of the most influential books ever published in the software industry. However, it was written more than 25 years ago. It is reasonable to wonder whether design patterns are as relevant today as they were when the book came out.</p>
<p>Many would answer ‘No’ to this question. A common argument is that design patterns reside at a lower level of abstraction than the one most developers work at. In other words, design patterns are encapsulated or “hidden” within the frameworks, standard libraries, and languages that developers use today.</p>
<p>I disagree with this argument, and, even if it were true, I would consider it insufficient because the same reasoning applies to, for example, algorithms, data structures, and memory management. We have advanced tools, but this does not mean that it is not important to know what is under the hood.</p>
<p>If, at this point, you have guessed that my answer to the above question is ‘Yes’, you are correct. I do believe that design patterns are still relevant, and, in this post, I try to explain the reasons behind my opinion.</p>
<p>To explain these reasons, I don’t discuss what design patterns are. My focus is on the benefits of design patterns that I consider relevant for any developer working in the software industry today.</p>
<h3 id="1-design-patterns-give-you-design-knowledge">1. Design patterns give you design knowledge.</h3>
<p>Design patterns represent design solutions that skilled professionals have used again and again in the past to solve recurring problems. Since these solutions incorporate the expertise of qualified professionals, we can expect them to apply good design principles.</p>
<p>Design principles are indeed pervasive in design patterns. For instance, the "State" pattern from the GoF book applies the "Open-Closed Principle" (which is one of the well-known SOLID principles) to make the addition of new states and transitions easier.</p>
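<p>To make this concrete, here is a minimal, hypothetical sketch of the State pattern; adding a new state means adding a class, without modifying the existing transition logic, which is the Open-Closed Principle in action:</p>

```python
from abc import ABC, abstractmethod

class State(ABC):
    @abstractmethod
    def next(self) -> "State": ...

class Draft(State):
    def next(self):
        return Published()

class Published(State):
    def next(self):
        return self  # terminal state

class Document:
    def __init__(self):
        self.state: State = Draft()

    def advance(self):
        # New states and transitions are added as new State subclasses;
        # this method never needs to change (open for extension,
        # closed for modification)
        self.state = self.state.next()
```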
<p>This leads to a (probably less conventional) way to look at design patterns: as practical examples that show good design principles in action.</p>
<blockquote>
<p>Design patterns are a mainstream technique for organizing design ideas. When you study design patterns, you do not only learn the patterns. You learn design principles and gain general design knowledge.</p>
</blockquote>
<h3 id="2-design-patterns-give-you-vocabulary-to-talk-about-software-design">2. Design patterns give you vocabulary to talk about software design.</h3>
<p>Can you imagine an architect discussing a house only in terms of bricks, walls, doors, and similar “low-level” terms?</p>
<p>It would take ages to describe a basic house.</p>
<p>To keep a more economical discourse, architects use “room patterns” such as bedroom, bathroom, and kitchen. This allows them to convey a lot of information with simple sentences such as "a three-bedroom two-bathroom house". The word “bathroom” carries implied information that does not need to be stated explicitly; for example, strong privacy requirements and specialized infrastructure such as a sink and drainage.</p>
<p>In a similar way to construction architects, it would take ages for software engineers to describe a basic system if they could only speak in terms of objects and references between them.</p>
<p>In object-oriented design, everything is an object, a reference or a message. This is a useful abstraction, but you need something more, at a higher level. You need patterns with special names that allow you to convey more information with fewer words.</p>
<p>You need to be able to say: “add an <em>Abstract Factory</em> to this module” or “we can solve this problem by means of <em>Dependency Injection</em>”.</p>
<blockquote>
<p>Design patterns allow you to pack a lot of information in short sentences. Design patterns enable more productive design discussions when the team is familiar with the patterns.</p>
</blockquote>
<h3 id="3-design-patterns-give-you-refactoring-targets">3. Design patterns give you refactoring targets.</h3>
<p>When all you have is a hammer, everything looks like a nail.</p>
<p>It is common, especially when you are learning about design patterns, to apply them everywhere. A pattern can give you an elegant design solution, but it can also add unnecessary complexity. Abstractions have a price, and paying this price is not justified when the abstractions address the wrong problem.</p>
<p>Extreme Programming (XP) advises us to avoid overengineering and to “do the simplest thing that could possibly work”.</p>
<p>If we keep the design simple and the quality of the code high, we can refactor continuously, easily incorporating new abstractions as they prove necessary. During refactoring, we may detect problems that can be solved by applying particular patterns. In this case, the patterns can act as refactoring targets, giving us guidance and direction.</p>
<blockquote>
<p>Follow the rules of simple design, avoid overengineering, and refactor continuously. When a pattern solves a real problem, use the pattern as a target. Let it guide you in the refactoring process.</p>
</blockquote>
<h3 id="4-design-patterns-can-improve-the-readability-of-your-code">4. Design patterns can improve the readability of your code.</h3>
<p>Jack W. Reeves, in his renowned paper from 1992:</p>
<blockquote>
<p>“What is software design?”</p>
</blockquote>
<p>In this paper, he suggested that the source code of a software system <strong>is</strong> the design. He observed that the code is the only entity that, similarly to engineering documents from other disciplines, contains enough information to enable the construction of the actual software product. You can draw diagrams, but they are mere guidelines, ancillary to the actual design.</p>
<p>On the other hand, Eric Evans, in his wonderful book:</p>
<blockquote>
<p>“Domain-Driven Design: Tackling Complexity in the Heart of Software”</p>
</blockquote>
<p>In this book, he taught us the benefits of keeping the source code a faithful reflection of the domain model. When changes to the code (likely) mean changes to the model and vice versa, the mapping between the two becomes obvious.</p>
<p>These two authors showed us that there exists a tight connection between source code and design.</p>
<p>This has an interesting consequence: when you apply a design pattern, the pattern should be obvious in the code. This will increase the readability of the code, at least for readers who are familiar with the pattern.</p>
<p>A pattern being obvious in the code does not necessarily mean that the code uses the pattern names. It means that the pattern is easily recognizable.</p>
<p>For example, you can create a method that defines the skeleton of an algorithm and defers some steps to subclasses. Readers that are familiar with <em>Template Method</em> will immediately recognize the pattern, but this does not mean that the method must contain “template method” in its name.</p>
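<p>A minimal, hypothetical sketch of this idea; the pattern is recognizable from the structure alone, even though no name contains “template method”:</p>

```python
from abc import ABC, abstractmethod

class ReportExporter(ABC):
    def export(self, records):
        # The skeleton of the algorithm is fixed here...
        lines = [self.header()]
        lines += [self.render(r) for r in records]
        return "\n".join(lines)

    # ...while some steps are deferred to subclasses
    @abstractmethod
    def header(self): ...

    @abstractmethod
    def render(self, record): ...

class CsvExporter(ReportExporter):
    def header(self):
        return "id,name"

    def render(self, record):
        return f"{record['id']},{record['name']}"
```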
<blockquote>
<p>If your code evokes design patterns, the readers that know the patterns will easily understand your intent.</p>
</blockquote>
<h1 id="conclusion">Conclusion</h1>
<p>In my opinion, design patterns are as useful today as they have always been, and this post discusses the four benefits of design patterns that are the reasons behind my belief.</p>
<p>For me, the key observation is that, when you study design patterns, you do not only learn or memorize patterns. You enhance your design vocabulary; you gain general and widely applicable design knowledge; you improve your refactoring strategies; and you enrich the mental toolbox that will allow you to write readable and maintainable code.</p>
<p>These benefits make learning patterns a good investment.</p>
]]></content:encoded></item><item><title><![CDATA[Big O notation and the Bachmann-Landau family]]></title><description><![CDATA[Most of us, software engineers, are familiar with (or, at least, have heard of) the well-known "Big O" notation (O-notation for short). The reasons are diverse. O-notation is a concept that arises frequently in technical interviews, so it is likely t...]]></description><link>https://mariocervera.com/big-o-notation-and-bachmann-landau-family</link><guid isPermaLink="true">https://mariocervera.com/big-o-notation-and-bachmann-landau-family</guid><category><![CDATA[General Programming]]></category><category><![CDATA[algorithms]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[data structures]]></category><dc:creator><![CDATA[Mario Cervera]]></dc:creator><pubDate>Sat, 28 Nov 2020 15:40:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1606577572655/kWz4-faPu.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most of us, software engineers, are familiar with (or, at least, have heard of) the well-known "Big O" notation (O-notation for short). The reasons are diverse. O-notation is a concept that arises frequently in technical interviews, so it is likely that we come across O-notation at some point in our careers. We can also simply like theoretical computer science or work in a domain where the design and analysis of algorithms is relevant.</p>
<p>O-notation allows you to characterize the asymptotic efficiency of algorithms. Using O-notation, you can describe how the running time and the required memory of an algorithm increase with the size of the input, as the size of the input grows without bound.</p>
<p>For example, you can say that the worst-case running time of selection sort is O(n²). Informally, this means that the running time of selection sort is bounded from above by a quadratic function of <em>n</em>, where <em>n</em> is the size of the input.</p>
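<p>As a reminder of where that quadratic bound comes from, here is a straightforward selection sort sketch; the nested loops perform on the order of <em>n²</em> comparisons:</p>

```python
def selection_sort(items):
    a = list(items)
    n = len(a)
    for i in range(n):
        # Find the minimum of the unsorted suffix a[i:] ...
        min_idx = i
        for j in range(i + 1, n):  # ... scanning up to n - i - 1 elements
            if a[j] < a[min_idx]:
                min_idx = j
        # ... and swap it into place
        a[i], a[min_idx] = a[min_idx], a[i]
    return a
```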
<p>The focus of this post is not on explaining O-notation, however. Here, I assume that you have a basic understanding of O-notation, and I focus on an aspect of this notation that is arguably less well known.</p>
<p>Did you know that O-notation is part of a wider family of related notations?</p>
<p>This family is called: <strong>the Bachmann-Landau notation</strong>.</p>
<p>There are invaluable resources that cover the Bachmann-Landau notation in detail. For example, in these books:</p>
<ul>
<li><p><em>Introduction to Algorithms (3rd edition)</em>. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford. The MIT Press (2009).</p>
</li>
<li><p><em>The Algorithm Design Manual (2nd edition)</em>. Skiena, Steven S. Springer (2008).</p>
</li>
</ul>
<p>You will find formal definitions, detailed examples, exercises, and even other notations that I do not cover here.</p>
<p>In this post, I focus on three notations of the Bachmann-Landau family:</p>
<ul>
<li>O (Big O)</li>
<li>Ω (Big Omega)</li>
<li>ϴ (Big Theta)</li>
</ul>
<p>My description of these notations is deliberately <strong>informal</strong>. My goal is just to give you a feel for what these notations are about and why they are useful.</p>
<h3 id="a-running-example">A running example</h3>
<p>Throughout this post, I use Quicksort as an example.</p>
<p>Quicksort is a divide-and-conquer comparison-based sorting algorithm. Even though it has a slow worst-case running time, it is often the best practical choice for sorting because it is remarkably efficient on average.</p>
<p>For this post, let's assume a Quicksort implementation whose worst case always takes a quadratic number of steps. Let's also assume that <em>n</em> represents the size of the input, which, in this case, is the length of the input array.</p>
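<p>To make the running example concrete, here is a minimal Quicksort sketch in Python. The functional style and names are mine, not a canonical in-place implementation. It uses the last element as the pivot, so an already-sorted input produces maximally unbalanced partitions and triggers the quadratic worst case assumed above.</p>

```python
def quicksort(items):
    """Return a sorted copy of items, using divide and conquer."""
    if len(items) <= 1:
        return list(items)            # base case: already sorted
    pivot = items[-1]                 # last element as pivot
    smaller = [x for x in items[:-1] if x <= pivot]
    larger = [x for x in items[:-1] if x > pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

<p>On a sorted array, each recursive call peels off only the pivot, so the number of comparisons grows quadratically with <em>n</em>.</p>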
<h3 id="a-general-view-of-the-bachmann-landau-notation">A general view of the Bachmann-Landau notation</h3>
<p>In general terms, the asymptotic notations of the Bachmann-Landau family are used to compare functions.</p>
<p>You can express the running time of an algorithm as a function <em>f(n)</em>, and, for example, you can say that <em>f(n) = O(g(n))</em>. This means that, for all sufficiently large values of <em>n</em>, the value <em>f(n)</em> is <strong>less than or equal to</strong> <em>g(n)</em> to within a constant factor.</p>
<p>This comparison-centric point of view is important because the different notations of the Bachmann-Landau family correspond to different comparison operators: ≤, ≥, and =.</p>
<h3 id="o-notation">O-notation</h3>
<p>In simple and informal terms:</p>
<blockquote>
<p>O-notation gives you a rough notion of <strong>less-than-or-equal-to</strong> (≤).</p>
</blockquote>
<p>In the Quicksort example, you can say that the worst-case running time of Quicksort is <em>O(n²)</em>.</p>
<p>This means that the worst-case of Quicksort takes <strong>at most</strong> a quadratic number of steps.</p>
<p>You could also say that the worst-case of Quicksort is <em>O(n³)</em> because O establishes only an upper bound.</p>
<blockquote>
<p>If you write that <em>f(n) = O(g(n))</em>, then the function <em>g(n)</em> is an <strong>asymptotically upper bound</strong> for <em>f(n)</em>.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606567435629/vK4CyXqwR.png" alt="o_plot_resized.png" /></p>
<p>Observe in the figure (taken from the "Introduction to Algorithms" book) that we do not care about small values of <em>n</em> (that is, anything smaller than <em>n₀</em>). After all, we do not care whether an algorithm sorts 5 items faster than another. We care about large values of <em>n</em>.</p>
<p>Observe also that the upper bound is not actually <em>g(n)</em>. It is <em>g(n)</em> multiplied by some positive constant <em>c</em>. O-notation deliberately ignores such multiplicative constants.</p>
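<p>The definition behind the figure can be checked numerically. The sketch below uses a hypothetical running-time function <em>f(n) = 2n² + 10n + 5</em> and hand-picked witnesses <em>c</em> and <em>n₀</em> to illustrate that <em>f(n) ≤ c·g(n)</em> holds for all <em>n ≥ n₀</em>, even though it fails for small <em>n</em>.</p>

```python
def f(n):
    return 2 * n**2 + 10 * n + 5  # hypothetical running time

def g(n):
    return n**2                   # the claimed bound: f(n) = O(n^2)

# Witnesses for the definition: f(n) <= c * g(n) for all n >= n0.
c, n0 = 3, 11

print(all(f(n) <= c * g(n) for n in range(n0, 10_000)))  # True
print(f(10) <= c * g(10))                                # False: n0 matters
```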
<h3 id="w-notation">Ω-notation</h3>
<p>While O-notation allows you to express upper bounds, Ω-notation gives you the opposite:</p>
<blockquote>
<p>Ω-notation gives you a rough notion of <strong>greater-than-or-equal-to</strong> (≥).</p>
</blockquote>
<p>In the Quicksort example, you can say that the worst-case of Quicksort is <em>Ω(n²)</em>.</p>
<p>This means that, in its worst-case, Quicksort takes <strong>at least</strong> a quadratic number of steps.</p>
<p>You could also say that the worst-case of Quicksort is <em>Ω(n)</em> because Ω establishes only a lower bound.</p>
<blockquote>
<p>If you write that <em>f(n) = Ω(g(n))</em>, then the function <em>g(n)</em> is an <strong>asymptotically lower bound</strong> for <em>f(n)</em>.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606567519217/W8OtswaEz.png" alt="Omega_Plot_resized.png" /></p>
<h3 id="8-notation">ϴ-notation</h3>
<p>Unlike O and Ω notations:</p>
<blockquote>
<p>ϴ-notation gives you a rough notion of <strong>equality</strong> (=).</p>
</blockquote>
<p>For example, you can say that the worst-case of Quicksort is <em>ϴ(n²)</em>.</p>
<p>This means that the worst-case of Quicksort takes <strong>exactly</strong> a quadratic number of steps (to within constant factors).</p>
<p>ϴ is a stronger notion than O and Ω because it establishes both an upper and a lower bound.</p>
<p>Therefore, if you write <em>f(n) = ϴ(g(n))</em>, then both <em>f(n) = Ω(g(n))</em> and <em>f(n) = O(g(n))</em> hold.</p>
<blockquote>
<p>If you write that <em>f(n) = ϴ(g(n))</em>, then the function <em>g(n)</em> is an <strong>asymptotically tight bound</strong> for <em>f(n)</em>.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606567598755/dVUd0j4A5.png" alt="Theta_Plot_resized.png" /></p>
<h3 id="conclusion">Conclusion</h3>
<p>There are notable exceptions, but it is likely that you will not use Bachmann-Landau notation as part of your daily job. At least, that’s my experience. But, even then, becoming familiar with it (and with the analysis of algorithms in general) is far from useless. Once you get the hang of it, you start considering trade-offs that never existed for you before. You start thinking about what solution is faster for your problem and what implementation is more memory-efficient.</p>
<p>You will start asking yourself: can I do better?</p>
<p>You will still give priority to writing code that is clean and can be maintained easily. But, you will not forget that, after making it work and making it right, you may need to <strong>make it fast</strong>.</p>
]]></content:encoded></item><item><title><![CDATA[What is the essence of clean code?]]></title><description><![CDATA[When we hear the term “Clean Code”, we usually think about the well-known book that was written by Robert C. Martin (also known as Uncle Bob):

“Clean Code: A Handbook of Agile Software Craftsmanship” (2009)

Ever since this book was published, the t...]]></description><link>https://mariocervera.com/the-essence-of-clean-code</link><guid isPermaLink="true">https://mariocervera.com/the-essence-of-clean-code</guid><category><![CDATA[clean code]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[refactoring]]></category><category><![CDATA[Testing]]></category><category><![CDATA[General Programming]]></category><dc:creator><![CDATA[Mario Cervera]]></dc:creator><pubDate>Mon, 16 Nov 2020 17:23:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1605225455238/Ir9ehVKfW.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When we hear the term “Clean Code”, we usually think about the well-known book that was written by Robert C. Martin (also known as Uncle Bob):</p>
<blockquote>
<p>“Clean Code: A Handbook of Agile Software Craftsmanship” (2009)</p>
</blockquote>
<p>Ever since this book was published, the term “Clean Code” has become increasingly popular. Today, its meaning is strongly influenced by Uncle Bob's vision, but this does not mean that the term was never used before the book came out.</p>
<p>In my opinion, developers did use the term "Clean Code" before the book was published. And I also believe that developers naturally related the term to code that is easy to read and maintain.</p>
<p>Readability and maintainability are desirable properties of software; however, they do not give us a profound and unambiguous picture of what clean code really is. This is why I want to dig deeper. I want to discuss the essence of clean code beyond its basic properties of readability and maintainability.</p>
<p>If you look at a deeper level, you will see that clean code is subjective. For example, you might consider a fragment of C++ code unreadable because you are not familiar with the syntax, but an experienced C++ programmer may disagree.</p>
<p>Uncle Bob recognizes the subjectivity of clean code in his book:</p>
<blockquote>
<p>“There are probably as many definitions as there are programmers“.</p>
</blockquote>
<p>This statement sparked my interest in writing this blog post. I want to share what clean code means <strong>to me</strong>. I want to answer the question “What is the essence of clean code?”, even though plenty of experts have already answered it much better than I ever will.</p>
<h3 id="1-clean-code-reads-like-a-good-novel">1. Clean code reads like a good novel.</h3>
<p>When code is clean, you should be able to sit on a comfortable couch, by the fire, with a good drink and dim lights, and enjoy the code like you would enjoy your favorite novel.</p>
<p>You might think this is an exaggeration, and you would be right, but to a lesser extent than you may imagine. Reading clean code should definitely be enjoyable: because names are intent-revealing and tell you a story; because the flow is trivial; because statements are so straightforward that they glide past your eyes; because the amount of gray cells that you have to engage is comfortably low.</p>
<blockquote>
<p>Clean code tells you a story that is captivating and easy to follow.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1605224627517/2EGoh4afq.png" alt="CleanTest.png" /></p>
<h3 id="2-clean-code-is-simple">2. Clean code is simple.</h3>
<p>Clean code is so simple that it does not make the author look smart. And yet, it is obvious that the code was written by someone who put effort in it.</p>
<p>Because simple is not easy.</p>
<p>You don’t achieve simplicity on your first try. First, you make the code work, ignoring best practices if necessary. Then, you refactor so that the code is readable and maintainable.</p>
<blockquote>
<p>First, you make it work. Then, you make it right ~ Kent Beck.</p>
</blockquote>
<h3 id="3-clean-code-is-tested">3. Clean code is tested.</h3>
<p>A corollary of the previous section is that you can’t write clean code without refactoring. And, to refactor successfully, you need automated tests to guarantee that behavior does not change. Therefore, you need tests to write clean code.</p>
<p>Furthermore, automated tests are the way code remains clean.</p>
<p>It does not matter how clean the code is today. If it has no tests, you can't refactor confidently. Therefore, the code will become unclean because code tends to get more convoluted and coupled over time.</p>
<p>Refactoring and testing help you counteract this tendency.</p>
<blockquote>
<p>The natural tendency of code is not towards cleanness. It is the opposite. Counteracting this tendency requires explicit action.</p>
</blockquote>
<h3 id="4-clean-code-is-focused">4. Clean code is focused.</h3>
<p>In a few words: clean code does one thing, and it does it well.</p>
<p>The writer does not overload the reader with unnecessary details. The intent is clear. There are no ambiguities, no surprises, and no unintended side effects.</p>
<p>If you call “isPrinterReady”, you know the function will only check whether the printer is ready. It will not inadvertently remove a file.</p>
<blockquote>
<p>Clean code does what it says, without unexpected twists.</p>
</blockquote>
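<p>As a minimal sketch of this idea (the <code>Printer</code> class and its fields are hypothetical), a focused query reads state and changes nothing:</p>

```python
class Printer:
    def __init__(self, online, has_paper):
        self.online = online
        self.has_paper = has_paper

    def is_ready(self):
        # A pure query: it reports state and has no side effects.
        return self.online and self.has_paper

print(Printer(online=True, has_paper=False).is_ready())  # False
```

<p>A caller can invoke <code>is_ready</code> as many times as it likes; nothing in the system changes as a result.</p>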
<h3 id="5-clean-code-does-not-repeat-itself">5. Clean code does not repeat itself.</h3>
<p>Clean code says everything once.</p>
<p>This does not mean that duplication is eliminated blindly. Clean code avoids premature abstractions and it knows that, if two identical pieces of code represent different knowledge, removing duplication introduces risk.</p>
<blockquote>
<p>Clean code follows the DRY principle, but it acknowledges that DRY is about duplicated knowledge, not duplicated code.</p>
</blockquote>
<h3 id="6-clean-code-speaks-about-the-problem-not-the-solution">6. Clean code speaks about the problem, not the solution.</h3>
<p>If a name in your code includes a “computerish” term (such as “DTO”), it is probably focusing on “how”.</p>
<p>Clean code focuses on “what”.</p>
<p>Clean code uses terms that focus on the problem domain, not on the specific solution on a computer.</p>
<p>And these terms are at the right level of abstraction. If a software module is in the domain layer, the code will use terms of the domain model. If a module is in the database layer, it will speak about databases.</p>
<blockquote>
<p>Clean code uses the right level of abstraction to talk about the problem being solved.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1605224936681/FGHGtghUB.png" alt="ProblemVsSolution.png" /></p>
<h3 id="7-clean-code-pays-close-attention-to-details">7. Clean code pays close attention to details.</h3>
<p>When you write clean code, you get details right and you do not make arbitrary decisions. If you declare a protected field, you know why you don't declare it private. If you declare a dynamic array, you know why a static array would not be better. Every detail counts.</p>
<p>In clean code, error codes and exceptions are meaningful; error handling is explicit; names are consistent; there are no memory leaks; etc.</p>
<blockquote>
<p>In software development, details matter. Clean code recognizes this truth.</p>
</blockquote>
<h3 id="8-clean-code-does-not-smell">8. Clean code does not smell.</h3>
<p>When you see code that smells bad, it is probably wise to refactor because design smells are often symptoms of deep quality problems.</p>
<p>Clean code does not smell, or, if it does, the odor is weak.</p>
<ul>
<li>Clean code is not rigid. It is easy to change.</li>
<li>Clean code is not immobile. You can reuse it easily.</li>
<li>Clean code is not opaque. The intent is easy to understand.</li>
<li>Clean code is not fragile. You can change it without introducing errors.</li>
</ul>
<blockquote>
<p>If it stinks, change it ~ Kent Beck.</p>
</blockquote>
<h1 id="conclusion">Conclusion</h1>
<p>After (hopefully) getting a deeper understanding of what clean code is, you may be wondering: why would I want to write clean code?</p>
<p>The main reason for me is that code is read far more times than it is written. Therefore, it is inefficient to favor solutions that make writing fast at the expense of making reading slow.</p>
<p>For example, it may be tempting to add a method to an interface only because you need to call the method and you hold a reference to the interface, but this can make the interface less cohesive and harder to understand.</p>
<p>Always consider the consequences of your actions. Do not take steps back in your journey towards clean code.</p>
<blockquote>
<p>The only way to go fast is to go well ~ Uncle Bob.</p>
</blockquote>
<h4 id="my-humble-advice-for-software-companies">My humble advice for software companies:</h4>
<blockquote>
<p>Stop looking for experts in paint brushes and focus your efforts on finding good artists. Focus your efforts on finding software engineers that can write clean code. This is how you go fast and at a sustainable pace. You must pay attention to technical excellence if you want to <strong>be agile</strong>. You can't be agile if you write dirty code.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1605225419131/cyMx7-ene.png" alt="SoftwareCompanies.png" /></p>
]]></content:encoded></item><item><title><![CDATA[7 non-obvious benefits of automated testing]]></title><description><![CDATA[I started exploring the fascinating world of test automation seven years ago. Right from the start, it was clear to me that testing has important benefits. Anywhere I read back then, I found people describing how testing leads to savings in developme...]]></description><link>https://mariocervera.com/non-obvious-benefits-automated-testing</link><guid isPermaLink="true">https://mariocervera.com/non-obvious-benefits-automated-testing</guid><category><![CDATA[Software Testing]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[clean code]]></category><category><![CDATA[design and architecture]]></category><category><![CDATA[General Programming]]></category><dc:creator><![CDATA[Mario Cervera]]></dc:creator><pubDate>Mon, 26 Oct 2020 16:14:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1603668467760/4GyjF2gAa.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I started exploring the fascinating world of test automation seven years ago. Right from the start, it was clear to me that testing has important benefits. Anywhere I read back then, I found people describing how testing leads to savings in development time and more robust software, among other things.</p>
<p>However, after all these years of practice, I have come to appreciate some benefits that were less obvious to me when I started.</p>
<p>In this post, I share them.</p>
<p>Maybe, when you read this post, you will think that these benefits are basic and well-known. That’s fair. My claim is just that they were not obvious to me when I was writing my first automated tests.</p>
<p>My lack of awareness at that time is the main motivation behind this post. If I can make only one person discover these benefits quicker than I did, then my goal will have been achieved. If I can convince only one unconvinced person of the usefulness of testing, then this post will have gone far beyond my initial expectations.</p>
<hr />
<h3 id="1-tests-give-you-code-samples">1. Tests give you code samples.</h3>
<p>Have you ever skipped an answer in StackOverflow because it didn't contain a code sample?</p>
<p>We look for code samples because they help us understand how things work.</p>
<p>Automated tests play the role of code samples. Each test represents an example of how the system is used at the code level; therefore, they are of invaluable help when we are trying to understand the system better.</p>
<blockquote>
<p>A software system is easier to understand if it has automated tests in place.</p>
</blockquote>
<hr />
<h3 id="2-tests-give-you-executable-specifications">2. Tests give you executable specifications.</h3>
<p>Written documents become obsolete easily, so they often lie. They specify what the system is supposed to do, not what it really does.</p>
<p>The only truth about the system behavior is in the source code. The code brings this behavior to life and makes it possible; the tests specify, document, and enforce it, in a formal and unambiguous way. Therefore, automated tests are executable specifications of the behavior of the system. If this behavior changes, the tests will fail. If we want to change this behavior, we must adapt the tests.</p>
<blockquote>
<p>Tests specify how the system actually behaves and they are always up to date.</p>
</blockquote>
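<p>As a minimal sketch of a test as an executable specification (the discount function and its business rule are hypothetical), a test states a behavior formally and fails the moment that behavior changes:</p>

```python
def apply_discount(price, percent):
    """Business rule: discounts are capped at 50%."""
    effective = min(percent, 50)
    return price * (1 - effective / 100)

def test_discount_is_capped_at_fifty_percent():
    # Executable specification: 80% is requested, but the cap applies.
    assert apply_discount(100, 80) == 50.0

test_discount_is_capped_at_fifty_percent()
print("specification holds")
```

<p>Unlike a written document, this specification cannot silently drift away from the code: if the cap is ever removed, the test fails.</p>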
<hr />
<h3 id="3-tests-give-you-the-first-users-of-your-code">3. Tests give you the first users of your code.</h3>
<p>When you are writing tests, you become a user of your own code. If the code is bad, you are the first to experience the problem. This makes you put more effort into refactoring, and into writing clean code and clean tests.</p>
<p>When you want your tests to be clean, you will choose better names for your variables, functions and classes. This improves the readability of your tests, which, in turn, improves the readability and design of your code, which, in turn, makes testing easier. It is a self-reinforcing loop.</p>
<blockquote>
<p>A good testing strategy leads to cleaner code.</p>
</blockquote>
<hr />
<h3 id="4-tests-give-you-immediate-feedback-about-code-changes">4. Tests give you immediate feedback about code changes.</h3>
<p>Have you ever felt the pleasure of misspelling a variable name and getting immediate feedback from your IDE?</p>
<p>Your IDE warns you about this kind of error because it is a syntactic issue that can be detected via static analysis at compile time. But you can get the same kind of feedback about runtime errors. You only need fast tests that you can run after every change to your code.</p>
<p>For example, if you replace “+” with “-” by mistake, you will change the semantics of the code and therefore its runtime behavior. If the tests are semantically stable, they will detect the error immediately.</p>
<blockquote>
<p>Automated tests shorten the feedback loop on coding decisions.</p>
</blockquote>
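<p>A tiny sketch of the “+” versus “-” slip (the <code>add</code> function is hypothetical): a fast assertion pins down the semantics, so the mutated version fails the instant the tests run.</p>

```python
def add(a, b):
    return a + b  # change "+" to "-" and the assertion below fails

# A semantically stable check that runs in microseconds.
assert add(2, 3) == 5
print("tests pass")
```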
<hr />
<h3 id="5-tests-prevent-the-occurrence-of-bugs">5. Tests prevent the occurrence of bugs.</h3>
<p>Having quick feedback on coding decisions has a significant consequence: bug prevention.</p>
<p>Tests specify the behavior of the system. As long as the tests are there, this behavior is preserved. If, sometime in the future, you change this behavior unintentionally, the tests will catch the error. This error, which might have been detected months later, has a lifetime of less than a few seconds.</p>
<blockquote>
<p>Tests help you detect errors early, when they are cheapest to fix.</p>
</blockquote>
<hr />
<h3 id="6-tests-give-you-a-safety-net">6. Tests give you a safety net.</h3>
<p>With a good suite of tests in place, you can modify code, run the tests, and immediately know whether you altered the system behavior. In other words, you can modify code safely.</p>
<p>This is where the safety net metaphor comes from. Automated tests act as a safety net that allows us to refactor confidently, just as a real net allows trapeze artists to perform without fear of being hurt. But nets can contain holes through which we may fall. These holes take the form of untested behaviors, where potential bugs may hide.</p>
<blockquote>
<p>Automated tests protect us from unexpected, and potentially harmful, events. But, test suites must be comprehensive, if we want them to meet this goal effectively.</p>
</blockquote>
<hr />
<h3 id="7-tests-have-positive-architectural-implications">7. Tests have positive architectural implications.</h3>
<p>To test a unit of code properly, we must isolate it from its dependencies. This is typically accomplished through the use of test doubles, of which mocks are the best-known kind.</p>
<p>When the code is highly coupled and dependencies are hard-coded, isolation becomes difficult and testing sometimes prohibitive. This implies that the more loosely coupled a system is, the more of it can be verified with unit tests. In other words, the lower the coupling, the higher the testability.</p>
<blockquote>
<p>Automated tests lead to low coupling and higher-quality design.</p>
</blockquote>
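<p>A minimal sketch of such isolation (the <code>OrderService</code> and gateway names are hypothetical): the dependency is injected rather than hard-coded, so a hand-rolled test double can stand in for the real payment gateway.</p>

```python
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway  # injected, not hard-coded

    def checkout(self, amount):
        return "paid" if self.gateway.charge(amount) else "declined"

class GatewayStub:
    """Test double: returns a canned answer, touches no network."""
    def __init__(self, succeeds):
        self.succeeds = succeeds

    def charge(self, amount):
        return self.succeeds

print(OrderService(GatewayStub(succeeds=True)).checkout(42))  # paid
```

<p>Because <code>OrderService</code> accepts any object with a <code>charge</code> method, the unit test exercises the checkout logic without a real payment system, and the design is less coupled as a result.</p>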
<hr />
<h1 id="conclusion">Conclusion</h1>
<p>If we consider all of these benefits together, we can reach an important conclusion:</p>
<p><strong>Automated testing dramatically increases software quality.</strong></p>
<p>And high quality is the only way we can have:</p>
<ul>
<li>The ability to respond to change quickly.</li>
<li>Huge savings in time and money.</li>
<li>High customer satisfaction.</li>
<li>A motivated development team.</li>
</ul>
<p>In this new era of uncertainty that we are living through, all of these advantages may be more necessary than ever before.</p>
]]></content:encoded></item></channel></rss>