Tests are written to assure the correctness and quality of the solution they examine. Engineers write tests at different stages of the development cycle, from unit tests up to end-to-end (e2e) tests. In fact, for every line of production code, multiple lines of test code are written.
Writing tests is no different from writing production code. To keep tests running correctly and assisting in the detection of issues, test code needs to be written in a way that stands the test of time, provides the needed information on failures, and can be maintained for the project's lifetime.
This talk presents test best practices that can be applied at all test stages. The practices focus on helping engineers write good tests that provide the information needed for debugging and troubleshooting the issues detected.
While each language and test framework may present different properties and challenges, the practices are agnostic to any specific language or tool.
Examples will be given in Golang using Ginkgo/Gomega and in Python using PyTest.
The talk will cover:
- Test structure
- Test isolation
- Test fixtures vs. test body
- Assertions
- Traceability
- Shared resources
- Dead test (code)
- Skipping, xfailing or not running
- Parallel tests
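As a taste of the PyTest side of the talk, here is a minimal sketch (not taken from the talk itself; all names are hypothetical) of two of the practices listed above: keeping setup in a fixture rather than the test body, and a clear arrange-act-assert structure with a traceable assertion message.

```python
import pytest


@pytest.fixture
def account():
    # Fixture: setup lives here, not in the test body.
    return {"owner": "alice", "balance": 100}


def deposit(account, amount):
    # Hypothetical production code under test.
    account["balance"] += amount
    return account


def test_deposit_increases_balance(account):
    # Arrange is done by the fixture. Act:
    updated = deposit(account, 50)
    # Assert: one focused check, with a message that aids troubleshooting.
    assert updated["balance"] == 150, "deposit(50) should raise balance 100 -> 150"
```

Keeping the fixture separate lets failures point at the behavior under test rather than at setup code, which is one of the traceability points the talk expands on.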
The talk is based on the following blog post: https://ehaas.net/blog/tests-best-practices/
Edward Haas is a software engineer in the CNV and RHV groups at Red Hat, specializing in the network domain. He was previously engaged in the development of networking solutions aimed at accelerating and optimizing network traffic, using tools like DPDK. A believer in clean code and an…
Thursday February 18, 2021 3:30pm - 3:55pm CET
Session Room 3
Red Hat contributes to stabilizing the upstream Linux kernel using its enterprise-class hardware. For many years, this hardware has been managed through Beaker, software for managing and automating labs of test computers. Beaker is used often, especially when testing different architectures and special hardware.
However, in some cases, for example when generic x86_64 devices suffice, it is easier to take advantage of the stable infrastructure of different providers. As providers appear, evolve their services, and add non-x86_64 hardware, it becomes important to be able to target these providers for running tasks and scaling out testing.
Once a system is up, a testing harness called restraint provides a relatively low-level and lightweight way to execute tasks. Beaker has used this harness for some time. It fulfills important requirements for kernel testing, such as being reliable, being able to handle machine reboots, and more.
Because of the CKI (Continuous Kernel Integration) team's need to scale out to different providers, I've written the UPT (Unified Provisioning Tool) project to simplify provisioning, and the Restraint Test Runner to take advantage of restraint for (kernel and non-kernel) test running.
This talk will briefly discuss the current approach to running (primarily kernel) tests and targeting different cloud providers.
It will discuss the aforementioned tools and their capabilities for executing tasks, running tests, simplifying test result interpretation, providing partial test results, and handling unexpected behavior.
As different tools in the open source community evolve as well, we will also discuss possible cooperation, as this is already on the radar.
We encourage you to share and invite people who might be interested; this talk is suitable for anyone in kernel testing/tools, CI and related topics.
Robot Framework is a generic, open-source, Python-based, widely used automation framework. It is an extensible keyword-driven framework, useful for acceptance testing, acceptance test-driven development (ATDD), behavior-driven development (BDD), and robotic process automation (RPA). It can be used in distributed, heterogeneous environments where automation requires different technologies and interfaces. Robot Framework and its libraries go a long way toward meeting versatile product requirements.
When specific functionality is not handled by the existing Robot Framework libraries, customizing the framework is the best solution. This proposal will illustrate how to use Robot Framework to make automation possible for any product. It will cover a generic design for extending Robot Framework that is applicable to any custom product. Extending the existing framework adds functionality missing from a wide range of products and makes test automation feasible in any project. A demo will provide a practical implementation of the presented design, covering how to create a project-specific framework on top of Robot Framework.
The talk will open with a brief refresher on Robot Framework; a detailed treatment of the basics is out of scope. The major focus will be on understanding how to design and implement Robot Framework customizations that make test automation feasible for any custom product.
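To make the extension pattern concrete, here is a minimal sketch (with hypothetical names; not from the talk) of how Robot Framework is commonly extended: a custom keyword library is just a Python class whose public methods become keywords.

```python
class ProductLibrary:
    """Hypothetical project-specific keyword library for Robot Framework."""

    # One library instance is shared by all tests in a suite.
    ROBOT_LIBRARY_SCOPE = "SUITE"

    def __init__(self):
        self._connected = False

    def connect_to_product(self):
        # Becomes the keyword "Connect To Product" in a test suite.
        self._connected = True

    def product_should_be_connected(self):
        # Becomes "Product Should Be Connected"; fails the test if not.
        if not self._connected:
            raise AssertionError("Product is not connected")
```

In a suite, `Library    ProductLibrary.py` in the Settings section would then make keywords such as `Connect To Product` available to test cases, which is the kind of product-specific extension the talk generalizes.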
I am based out of Red Hat Pune as a software quality engineer. At Red Hat, I have worked on projects such as Red Hat Gluster Storage and am currently working on OpenShift Container Storage. I have 2 years of experience in VM kernel development and 8 years of experience in automation…
Saturday February 20, 2021 11:30am - 12:10pm CET
Session Room 4
The Testing Farm team has been fighting unreliable testing infrastructure for years. Last year we got the possibility of failing over from our internal OpenStack cloud to the public cloud (AWS). In this session, we will guide you through our journey of adding transparent failover functionality to our CI pipeline, with AWS complementing an OpenStack tenant. We will look at the requirements on the AWS cloud in terms of connecting it to the internal infrastructure, and at the main feature of our open-source provisioner Artemis that made this failover possible: a programmable routing mechanism. In addition to the actual provisioning, Artemis also shields us from short-lasting infrastructure outages, which previously caused irrecoverable failures in our CI pipeline. In the end, we would like to show you how you can take advantage of Artemis for your own benefit, and share our interim plans for adding support for additional public clouds, Beaker, and more features planned by the team.
openQA is an integration testing framework for whole operating systems. It can perform the same actions that a human would perform when interacting with the system under test, only in an automated fashion. This can be leveraged to continuously test repetitive tasks, like the installer of a Linux distribution. It is used extensively by both the Fedora and openSUSE distributions to continuously test their latest images. This talk will introduce openQA and its basic concepts, explain where it is applicable, and showcase some more advanced features (like testing on bare-metal hardware). It is aimed at beginners and will show you whether openQA fits your use case, while also providing some instructions on how to start using it.
Dan joined SUSE to work on development tools as part of the developer engagement program, after working on embedded devices. He currently maintains the openSUSE Vagrant boxes and Vagrant itself, and creates the Open Build Service Connector, an extension for Visual Studio Code that integrates…
Saturday February 20, 2021 12:45pm - 1:25pm CET
Session Room 4
Many modern web applications use API schemas to describe their contracts. But the presence of a schema doesn't mean that the real application behaves as the schema defines. There are many reasons for that, from the fundamental inability to describe the application with the chosen schema spec to the ubiquitous human factor. This problem has many consequences, and an application crash is the least dangerous of them. I will talk about Schemathesis, a tool that helps solve many of these problems with property-based testing. We'll go through typical use cases and talk about stateful testing, an approach that lets you generate whole sequences of API calls automatically. You'll learn how to test API schemas with minimal effort and create effective test scenarios that will make your applications more reliable. If you are interested in the practical usage of property-based testing and how to implement it in real-life projects, I am keen to see you at the session!
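Schemathesis automates the whole loop below; this toy sketch (pure Python, hypothetical names, not the Schemathesis API) shows what property-based API testing boils down to: generate many inputs that conform to a schema, call the target, and check a property that must hold for every input rather than for one hand-picked example.

```python
import random
import string


def generate_payload(schema):
    # Toy generator: draw a random value conforming to each field spec.
    payload = {}
    for field, kind in schema.items():
        if kind == "integer":
            payload[field] = random.randint(-10**6, 10**6)
        else:  # "string"
            payload[field] = "".join(
                random.choices(string.ascii_letters, k=random.randint(0, 20))
            )
    return payload


def create_user(payload):
    # Stand-in for a real API endpoint; returns an HTTP-like status code.
    if not isinstance(payload.get("name"), str):
        return 500
    return 201


def test_never_crashes():
    schema = {"name": "string", "age": "integer"}
    for _ in range(200):
        payload = generate_payload(schema)
        # Property: the application must never crash (no 5xx responses).
        assert create_user(payload) < 500
```

In practice a real tool derives the generator from the application's OpenAPI schema and checks response conformance as well, so the property holds across inputs no human would think to write by hand.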