2016.10.18 / industry
Matt Lacey
Most developers I meet aren't big fans of testing. A few are, but most don't do it, would rather not do it, or do it only begrudgingly. I love testing and happily spend more time on tests than on writing new code. I'd argue that it's because of this focus on testing that I can spend less time writing new code or fixing bugs and still be very productive.
If you're unsure about writing tests or don't do much of it, the following will point you in a better direction.
1. Not doing it.
It's an easy trap to fall into but one without an excuse. Make plans to start adding tests to the code you're working on now and add them to future projects from the start.
2. Not starting testing from the beginning of a project.
It's harder to go back and add tests retrospectively, and doing so may require architectural changes, which ultimately means it takes longer to have code you can be confident in. Adding tests from the start saves time and effort over the lifetime of a project.
3. Writing failing tests.
The popularity of the TDD methodology has brought the idea of Red-Green-Refactor to the software testing world. This is commonly misunderstood to mean that you should "start by writing a failing test." That is not the case. The purpose of creating a test before you write the code is to define what the correct behavior of the system should be. In many cases this will be a failing test (indicated in red), but it may also be represented by an inconclusive or unimplemented test.
4. Being afraid of unimplemented tests.
A big problem in software development is the separation between code and any documentation about what the system should actually do. By having a test with a name that clearly defines the intended behavior that you will eventually implement, you will get some value from a test even if how it will be written is currently unknown.
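As a sketch of this idea, here is how placeholder tests might look with Python's `unittest` module. The refund scenario and names are hypothetical; the point is that skipped tests document intended behavior and show up as "skipped" (the inconclusive state) rather than as failures.

```python
import unittest

class RefundPolicyTests(unittest.TestCase):
    # Each test name records intended behaviour even before the
    # implementation (or even the test body) exists.
    @unittest.skip("refund workflow not implemented yet")
    def test_refund_within_30_days_returns_full_amount(self):
        self.fail("write this once the refund workflow exists")

    @unittest.skip("refund workflow not implemented yet")
    def test_refund_after_30_days_is_rejected(self):
        self.fail("write this once the refund workflow exists")

# Running the suite reports two skips and no failures.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(RefundPolicyTests).run(result)
```

Anyone reading the test run gets a summary of behavior that is still to be built, without the noise of red failures.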
5. Not naming the tests well.
Naming things in software is famously difficult to do well and this applies to tests as well. There are several popular conventions on how to name tests. The one you use isn't important as long as it's used consistently and accurately describes what is being tested.
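One common convention is `unit_scenario_expectedResult`. A minimal sketch, with an illustrative discount function invented for the example:

```python
def apply_discount(total, is_member):
    # Hypothetical rule for the example: members get 10% off.
    return round(total * 0.9, 2) if is_member else total

# The names alone tell you what behaviour is covered and what
# a failure would mean, before you read a single assert.
def test_apply_discount_member_reduces_total_by_ten_percent():
    assert apply_discount(100.0, is_member=True) == 90.0

def test_apply_discount_non_member_leaves_total_unchanged():
    assert apply_discount(100.0, is_member=False) == 100.0

test_apply_discount_member_reduces_total_by_ten_percent()
test_apply_discount_non_member_leaves_total_unchanged()
```

Whichever convention you pick, a failing test's name should read like a bug report.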
6. Having tests that do too much.
Long, complicated names are a good indication that you're trying to test more than one thing at once. An individual test should test only a single thing. If it fails, it should give a clear indication of what went wrong in the code; you should not need to look at which part of the test failed to see what the problem is. This doesn't mean that you should never have multiple asserts in a test, but they should be tightly related. For instance, it's ok to have a test that looks at the output of an order processing system and verifies that there is a single line item and that it contains a specific item. It's not ok to have a single test that verifies that the same system creates a specific item, logs it to the database, and also sends a confirmation email.
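The order-processing example might look like this, assuming a minimal hypothetical `Order` model. Both asserts describe one thing: the order's contents.

```python
class Order:
    """Illustrative model: just a list of line items (SKUs)."""
    def __init__(self):
        self.line_items = []

    def add(self, sku):
        self.line_items.append(sku)

def process_order(sku):
    # Stand-in for the real order processing system.
    order = Order()
    order.add(sku)
    return order

# OK: both asserts are about the same single concern.
def test_process_order_creates_single_line_item_with_requested_sku():
    order = process_order("SKU-42")
    assert len(order.line_items) == 1
    assert order.line_items[0] == "SKU-42"

test_process_order_creates_single_line_item_with_requested_sku()
```

Database logging and the confirmation email would each get their own separately named test rather than sharing this one.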
7. Not actually testing the code.
It's common to see people who are new to testing create overly complicated mocks and setup procedures that don't end up testing the actual code. Such tests might verify that the mock is correct, or that the mock does the same thing as the real code, or they might just execute the code without ever asserting anything. These "tests" are a waste of effort, especially if they exist only to boost the level of code coverage.
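A sketch of the difference, using Python's `unittest.mock` and an invented `charge` function as the code under test. The first test only proves the mock returns what it was told to return; the second asserts on what the real function does with its collaborator.

```python
from unittest import mock

def charge(gateway, amount):
    # The real code under test (hypothetical): delegates to a
    # payment gateway and builds a receipt record.
    receipt = gateway.charge(amount)
    return {"amount": amount, "receipt": receipt}

# Useless: exercises no real code, only the mock's own configuration.
def test_mock_returns_what_it_was_told():
    gateway = mock.Mock()
    gateway.charge.return_value = "R1"
    assert gateway.charge(10) == "R1"

# Useful: asserts on what charge() actually produces.
def test_charge_builds_receipt_record():
    gateway = mock.Mock()
    gateway.charge.return_value = "R1"
    result = charge(gateway, 10)
    assert result == {"amount": 10, "receipt": "R1"}
    gateway.charge.assert_called_once_with(10)

test_mock_returns_what_it_was_told()
test_charge_builds_receipt_record()
```

If you deleted the production code and a test still passed, it was never testing anything.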
8. Worrying about code coverage.
The idea of code coverage is noble but often has limited actual value. Knowing how much of the code is executed when the tests run should be useful, but because coverage doesn't consider the quality of the tests executing the code, it can be meaningless. Code coverage is only interesting if it is very high or very low. Very high coverage suggests that more of the code is probably being tested than will bring value; very low coverage suggests that there probably aren't enough tests for the code. With this ambiguity, some people struggle to know whether an individual piece of code should be tested. I use a simple question to determine this: does the code contain non-trivial complexity? If it does, then you need some tests. If it doesn't, then you don't. Testing property accessors is a waste of time; if they fail, there's something more fundamentally wrong with your system than the code you're writing. If you can't look at a piece of code and instantly see everything it does, then it's non-trivial. This doesn't just apply to code as you write it: if you're revisiting code at any point after it's been written, then it needs tests. If a bug is ever found in existing code, that's confirmation that there weren't sufficient tests for the complexity of that area of the code.
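To make the trivial/non-trivial distinction concrete, here is an invented `Invoice` class. The property is trivial and earns no test; the tax calculation has rounding and edge cases, so it does.

```python
class Invoice:
    def __init__(self, lines):
        self._lines = lines

    @property
    def lines(self):
        # Trivial: you can see everything it does at a glance.
        # Writing a test for this adds coverage but no value.
        return self._lines

    def total_with_tax(self, rate):
        # Non-trivial: rounding, empty input, and odd rates can all
        # go wrong, so this earns tests.
        subtotal = sum(self._lines)
        return round(subtotal * (1 + rate), 2)

# Tests target only the non-trivial behaviour.
assert Invoice([10.0, 2.5]).total_with_tax(0.2) == 15.0
assert Invoice([]).total_with_tax(0.2) == 0.0
```

Note that a coverage tool would reward testing `lines` and `total_with_tax` equally, which is exactly why the percentage alone tells you so little.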
9. Focusing on just one type of testing.
Once you do start testing, it can be easy to get drawn into just one style of test. This is a mistake. You can't adequately test all parts of a system with one type of test. You need unit tests to confirm that individual components of the code work correctly. You need integration tests to confirm that different components work together. You need automated UI tests to verify that the software can be used as intended. Finally, you need manual tests for any parts that can't be easily automated, and for exploratory testing.
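A toy sketch of the same feature covered at two of these levels (the parser and totaller are invented for the example): a unit test exercises one component in isolation, while an integration test exercises two components working together.

```python
def parse_amount(text):
    """Component 1: parse a user-entered whole-number amount."""
    return int(text.strip())

def sum_amounts(texts):
    """Component 2: uses the parser to total a list of entries."""
    return sum(parse_amount(t) for t in texts)

def test_parse_amount_strips_whitespace():        # unit test
    assert parse_amount(" 42 ") == 42

def test_sum_amounts_totals_parsed_entries():     # integration test
    assert sum_amounts(["1", " 2 ", "3"]) == 6

test_parse_amount_strips_whitespace()
test_sum_amounts_totals_parsed_entries()
```

If only the integration test existed, a parsing bug would surface as a vague totalling failure; if only the unit test existed, a wiring mistake between the two components would go unnoticed.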
10. Focusing on short-term tests.
The majority of the value from tests is obtained over time. Tests shouldn't just exist to verify that something has been written correctly, but that it continues to function correctly as time passes and other changes are made to the codebase. Whether the problems are regression errors or new exceptions, tests should be run repeatedly to detect them as early as possible, as that makes them quicker, cheaper, and easier to fix. Having tests that can be automated and executed quickly, without variation (human error), is why coded tests are so valuable.
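Regression tests are the clearest example of this long-term value: once a bug is fixed, a test that reproduces it keeps it fixed forever. A sketch, using a hypothetical `slugify` function that once crashed URL generation on empty or symbol-only titles:

```python
import re

def slugify(title):
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    # The fix being pinned: empty or symbol-only titles used to
    # produce an empty slug (hypothetical bug for the example).
    return slug or "untitled"

def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_empty_title_regression():
    # Runs on every build from now on, so the old bug can never
    # silently return.
    assert slugify("") == "untitled"
    assert slugify("!!!") == "untitled"

test_slugify_basic()
test_slugify_empty_title_regression()
```

The regression test costs seconds per run but guards against a class of failure for the life of the codebase.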
11. Being a developer who relies on someone else to run (or write) the tests.
Tests have very little value if they're not run. If tests can't be run, then they won't be, and bugs that could have been caught will be missed. Having as many tests as possible run automatically (as part of a continuous integration system) is a start, but anyone on a project should be able to run any test at any time. If you need special setup, machines, permissions, or configurations to run tests, these will only serve as barriers to the tests being executed. Developers need to be able to run tests before they check in code, so they need access to, and the ability to run, all relevant tests. Code and tests should be kept in the same place, and any setup needed should be scripted. One of the worst examples I've seen of this being done badly was on a project where a sub-team of testers would periodically take a copy of the code the developers were working on. They'd modify the code so they could execute a series of tests that developers didn't have access to, on a specially configured (and undocumented) machine, and then send a single large email to all developers indicating any issues they'd found. Not only is this a bad way to test, it's a bad way to work as a team. Do not do this.
Having code that executes correctly is part of what it means to be a professional developer. The way to guarantee the accuracy of the code you write is with appropriate tests that accompany it. You cannot be a professional developer and rely solely on other people to write and run tests for your code.
If none of the above apply to you, congratulations. Carry on making robust, valuable software.
If some of them do apply to you, now's a great time to start doing something about it.