There's an implicit problem in automated test execution in a build pipeline: the number of tests keeps increasing. If tests are written well (i.e., properly decoupled), this shouldn't be a big problem. Each new test should only add a minute fraction of a second to the build. The sad reality, however, is that not all tests are good and, well, even if they were, the duration of a build is going to keep increasing.
Build performance tuning has become a part of daily life on many development teams, and there are many options. Parallelization can lead to big wins, as can all sorts of environmental optimization, but it is a constant battle against growth in a code base. It pays to revisit our assumptions.
Since the early days of Agile software development, there has been an unspoken assumption that a build should run all of the tests. It makes sense. If automated tests are a standard of expected behavior for a project, running them all gives us a solid, easily articulated statement about our code quality. It also assures us that the whole of our system is working to a particular standard, i.e., we are testing the real thing that will be deployed and we are testing it with our full automated arsenal.
Since then, people have tried a variety of half-way options. One common pattern is to run all unit tests at each build along with a series of "smoke tests" to give developers quick feedback on check-in. Integration tests run later. This is never ideal, but it is a pattern that many teams with large test suites fall into.
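As a sketch of that pattern, here's what a staged run might look like, assuming pytest and hypothetical "smoke" and "integration" markers (the layout and names are illustrative, not part of any particular team's setup):

```python
# A minimal sketch of a staged build: fast tests on every check-in,
# slow tests deferred. Assumes pytest and hypothetical test markers.
import subprocess
import sys

def run_stage(args: list[str]) -> int:
    """Run one pytest invocation and return its exit code."""
    return subprocess.run(["pytest", *args]).returncode

if __name__ == "__main__":
    # Stage 1: all unit tests plus the designated smoke tests.
    # A failure here stops the build and gives feedback in minutes.
    if run_stage(["tests/unit"]) != 0 or run_stage(["-m", "smoke"]) != 0:
        sys.exit(1)
    # Stage 2: integration tests, typically a later (or nightly) pipeline job.
    sys.exit(run_stage(["-m", "integration"]))
```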
Some organizations have experimented with smarter builds. If you run coverage on your code through your test suite, you can build up a map of the tests that can possibly fail when particular areas of code are touched. It's computationally intensive, but it is doable if you have plenty of compute power around and rerun the analysis periodically to keep the map current.
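A sketch of that map in Python, assuming you can get a per-test set of covered files out of your coverage tool (that data shape is my assumption, not a particular tool's API):

```python
# Coverage-based test selection: invert {test -> files it executes}
# into {file -> tests that touch it}, then select tests by changed files.
from collections import defaultdict

def invert_coverage(coverage: dict[str, set[str]]) -> dict[str, set[str]]:
    """Turn a per-test coverage report into a file-to-tests map."""
    file_to_tests: dict[str, set[str]] = defaultdict(set)
    for test, files in coverage.items():
        for path in files:
            file_to_tests[path].add(test)
    return file_to_tests

def tests_that_can_fail(changed_files: set[str],
                        file_to_tests: dict[str, set[str]]) -> set[str]:
    """Every test whose execution touches any of the changed files."""
    selected: set[str] = set()
    for path in changed_files:
        selected |= file_to_tests.get(path, set())
    return selected
```

The expensive part is producing the per-test coverage data in the first place, which is why the map gets rebuilt periodically rather than on every commit.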
There is an alternative, though. We can use prior experience to maximize the speed of feedback that we get from a build. Kent Beck experimented in this area years ago with his JUnitMax project. It took recently failed tests and pushed them forward in the build so that they ran very early. I've been experimenting with a variation of that: build a map from test failures to the files that were modified in the commit where the failures occurred. Then, on every new commit, take the union of two sets: the tests that have ever failed when the files in the commit were touched, and the tests that were recently introduced. Run that union as the build. My theory (which I have not been able to verify yet) is that for many projects this process may converge in such a way that nearly all of the failures that would occur in a full build will also occur in this abbreviated build. If that's the case, it could give developers a strong early sense of "done", allowing any errors discovered in a full build to be handled in a bug reporting process.
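Here is a rough sketch of that bookkeeping, with invented data shapes for illustration (nothing here is JUnitMax or any real tool):

```python
# Failure-history test selection: remember which tests failed alongside
# which files, and select the union of those tests plus new tests.
from collections import defaultdict

class FailureMap:
    def __init__(self) -> None:
        # file path -> every test that has ever failed in a commit touching it
        self.failed_with: dict[str, set[str]] = defaultdict(set)
        self.recent_tests: set[str] = set()

    def record_full_build(self, changed_files: set[str],
                          failed_tests: set[str],
                          new_tests: set[str]) -> None:
        """After a full build, associate its failures with the commit's files."""
        for path in changed_files:
            self.failed_with[path] |= failed_tests
        self.recent_tests |= new_tests

    def select(self, changed_files: set[str]) -> set[str]:
        """The abbreviated build: tests that have ever failed when these
        files were touched, plus all recently introduced tests."""
        selected = set(self.recent_tests)
        for path in changed_files:
            selected |= self.failed_with.get(path, set())
        return selected
```

If the map converges the way I suspect it might, select() returns a small, stable fraction of the suite that still catches nearly all of the failures a full build would find.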
If it works, you essentially have a build which foretells failures in the future based on failures in the past: a precognitive build.