TAPPER Manual



1 Synopsis

TAPPER is an infrastructure that consists of 3 important layers:

   * an automation layer that sets up machines and runs test programs on them
   * test suites and wrappers that report their results in TAP (the Test Anything Protocol)
   * a reports framework (receiver, Reports::API, Web User Interface) that collects and evaluates the results

The layers work completely autonomously, though they can also be connected together.

To fully exploit the system, you need to learn the tasks described in the following chapters.

Person in charge: Steffen Schwigon



2 Technical Infrastructure

2.1 Adding a new host into automation

This chapter describes what you need to do in order to get a new machine into the Tapper test rotation.

2.1.1 Make machine remotely restartable

In the osrc network this means attaching it to osrc_rst, the reset switch tool: a physical device plus the software to trigger the reset.

Person in charge: Jan Krocker

2.1.2 Make machine PXE boot aware

Person in charge: Maik Hentsche

2.1.3 Add host to the hardware database

If the machine is not already listed at http://bancroft.amd.com/hardwaredb/, contact Jan Krocker.

Person in charge: Jan Krocker

2.1.4 Optionally: enable ‘temare’ to generate tests for this host

The steps up to here are generally enough to put ‘preconditions’ for this host into the Tapper database and thus use the host for tests.

Additionally, you can register the host in ‘temare’.

temare is the Test Matrix Replacement program that schedules tests according to our test plan. If you want tests scheduled for the new machine, register it there.

Person in charge: Maik Hentsche, Frank Arnold



3 Test Protocol

3.1 Test Anything Protocol (TAP)

3.2 Tutorial

3.2.1 Just plan and success

Example:

      1..3
      ok
      ok
      not ok

3.2.2 Succession numbers

Example:

     1..3
     ok 1
     ok 2
     not ok 3

3.2.3 Test descriptions

Example:

     1..3
     ok 1 - input file opened
     ok 2 - file content
     not ok 3 - last line

3.2.4 Mark tests as TODO

Example:

     1..3
     ok 1 - input file opened
     ok 2 - file content
     not ok 3 - last line # TODO

3.2.5 Comment TODO tests with reason

Example:

     1..3
     ok 1 - input file opened
     ok 2 - file content
     not ok 3 - last line # TODO just specced

3.2.6 Mark tests as SKIP (with reason)

Example:

     1..3
     ok 1 - input file opened
     ok 2 - file content
     ok 3 - last line # SKIP missing prerequisites

3.2.7 Diagnostics

Example:

     1..3
     ok 1 - input file opened
     ok 2 - file content
     not ok 3 - last line # TODO just specced
     # Failed test 'last line'
     # at t/data_dpath.t line 410.
     # got: 'foo'
     # expected: 'bar'

3.2.8 YAML Diagnostics

Example:

     1..3
     ok 1 - input file opened
     ok 2 - file content
     not ok 3 - last line # TODO just specced
       ---
       message: Failed test 'last line' at t/data_dpath.t line 410.
       severity: fail
       data:
         got: 'foo'
         expect: 'bar'
       ...

3.2.9 Headers for TAPPER

Example:

     1..3
     # Tapper-Suite-Name: Foo-Bar
     # Tapper-Suite-Version: 2.010013
     ok 1 - input file opened
     ok 2 - file content
     not ok 3 - last line # TODO just specced

These are the headers that apply to the whole report:

      # Tapper-suite-name:                 -- suite name
      # Tapper-suite-version:              -- suite version
      # Tapper-machine-name:               -- machine/host name
      # Tapper-machine-description:        -- more details to machine
      # Tapper-starttime-test-program:     -- start time for complete test (including guests)
      # Tapper-endtime-test-program:       -- end time for complete test (including guests)

3.2.10 Sections for TAPPER

Example:

     1..2
     # Tapper-section: arithmetics
     ok 1 add
     ok 2 multiply
     1..1
     # Tapper-section: string handling
     ok 1 concat
     1..3
     # Tapper-section: benchmarks
     ok 1
     ok 2
     ok 3

These are the headers that apply to single sections:

      # Tapper-ram:                      -- memory
      # Tapper-cpuinfo:                  -- what CPU
      # Tapper-uname:                    -- kernel information
      # Tapper-osname:                   -- OS information
      # Tapper-uptime:                   -- uptime, maybe the test run time
      # Tapper-language-description:     -- for Software tests, like "Perl 5.10", "Python 2.5"
      # Tapper-xen-version:              -- Xen version
      # Tapper-xen-changeset:            -- particular Xen changeset
      # Tapper-xen-dom0-kernel:          -- the kernel version of the dom0
      # Tapper-xen-base-os-description:  -- more verbose OS information
      # Tapper-xen-guest-description:    -- description of a guest
      # Tapper-xen-guest-test:           -- the started test program
      # Tapper-xen-guest-start:          -- start time of test
      # Tapper-xen-guest-flags:          -- flags used for starting the guest
      # Tapper-kvm-module-version:       -- version of KVM kernel module
      # Tapper-kvm-userspace-version:    -- version of KVM userland tools
      # Tapper-kvm-kernel:               -- version of kernel
      # Tapper-kvm-base-os-description:  -- more verbose OS information
      # Tapper-kvm-guest-description:    -- description of a guest
      # Tapper-kvm-guest-test:           -- the started test program
      # Tapper-kvm-guest-start:          -- start time of test
      # Tapper-kvm-guest-flags:          -- flags used for starting the guest
      # Tapper-flags:                    -- Flags that were used to boot the OS
      # Tapper-reportcomment:            -- Freestyle comment

3.2.11 Developing with TAP

TAP-emitting test scripts work with standard tooling during development; Perl's prove, for example, runs them and summarizes the results:

     $ prove t/*.t
     t/00-load.........ok
     t/boilerplate.....ok
     t/pod-coverage....ok
     All tests successful.
     Files=4, Tests=6, 0 wallclock secs
     ( 0.05 usr 0.00 sys + 0.28 cusr 0.05 csys = 0.38 CPU)
     Result: PASS

3.2.12 TAP tips

3.3 Special Tapper headers inside TAP

3.4 Particular use-cases

3.4.1 Report Groups

3.4.1.1 Report grouping by same testrun

If we have a Xen environment then there are many guests each running some test suites but they don't know of each other.

The only thing that combines them is a common testrun-id. If each suite just reports this testrun-id as the group id, then the receiving side can combine all those autonomously reporting suites back together by that id.

So each suite should simply output

     # Tapper-reportgroup-testrun: 1234

with 1234 being a testrun ID that is available via the environment variable $TAPPER_TESTRUN. This variable is provided by the automation layer.

3.4.1.2 Report grouping by arbitrary identifier

If the grouping id is not a testrun id, e.g., because you have set up a Xen environment without the TAPPER automation layer, then generate one random value once in dom0 by yourself and use that same value inside all guests with the following header:
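
     # Tapper-reportgroup-arbitrary: IDENTIFIER

with IDENTIFIER being the value you generated (the header name here is the arbitrary-identifier counterpart of the reportgroup-testrun header above; verify it against your Tapper version).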

How that value gets from dom0 into the guests is left as an exercise, e.g., via preparing the init scripts in the guest images before starting them. That's not the job of the test suite wrappers; they only need to evaluate the environment variable TAPPER_REPORT_GROUP.
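
A wrapper can cover both grouping cases with a few lines of shell (a minimal sketch, assuming a Bourne-style shell and the header names described above):

     if [ -n "$TAPPER_TESTRUN" ] ; then
         echo "# Tapper-reportgroup-testrun: $TAPPER_TESTRUN"
     elif [ -n "$TAPPER_REPORT_GROUP" ] ; then
         echo "# Tapper-reportgroup-arbitrary: $TAPPER_REPORT_GROUP"
     fi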

Person in charge: Frank Becker



4 Test Suite Wrappers

This section is about the test suites and wrappers around existing suites. These wrappers are part of our overall test infrastructure.

It's basically about the middle part of the Tapper architecture overview (image: tapper_architecture_overview.png).

We have wrappers for existing test and benchmark suites.

Wrappers just run the suites as a user would manually run them, but additionally extract results and produce TAP (Test Anything Protocol).

We have some specialized, small test suites that complement the general suites, e.g. for extracting meta information or parsing logs for common problems.

If the environment variables

     TAPPER_REPORT_SERVER
     TAPPER_REPORT_PORT

are set, the wrappers report their results by piping their TAP output there; otherwise they print to STDOUT.
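
In a wrapper this typically looks like the following sketch ($TAPFILE is a hypothetical placeholder for the wrapper's collected TAP output; netcat as used in the Reports::API examples later):

     if [ -n "$TAPPER_REPORT_SERVER" ] ; then
         netcat -w1 "$TAPPER_REPORT_SERVER" "$TAPPER_REPORT_PORT" < "$TAPFILE"
     else
         cat "$TAPFILE"
     fi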

4.1 Available test suite wrappers

4.1.1 LMbench

tapper_testsuite_lmbench.sh
A wrapper around the benchmark suite LMbench.

See also http://www.bitmover.com/lmbench/.

4.1.2 kernbench

tapper_testsuite_kernbench.sh
A wrapper around the benchmark suite kernbench.

See also http://freshmeat.net/projects/kernbench/.

4.1.3 CTCS

tapper_testsuite_ctcs.sh
A wrapper around the Cerberus Test Control System (CTCS).

See also http://sourceforge.net/projects/va-ctcs/.

4.1.4 LTP

tapper_testsuite_ltp.sh
A wrapper around the Linux Test Project (LTP).

See also http://ltp.sourceforge.net/.

4.1.5 dom0-meta

tapper_testsuite_dom0_meta.sh
A suite that produces meta information about the dom0 environment.

4.2 Environment variables

The TAPPER automation layer provides some environment variables that the wrappers can use:

TAPPER_TESTRUN
Currently active Testrun ID.
TAPPER_SERVER
The controlling automation Server that initiated this testrun.
TAPPER_REPORT_SERVER
The target server to which the tests should report their results in TAP.
TAPPER_REPORT_PORT
The target port to which the tests should report their results in TAP. Complements TAPPER_REPORT_SERVER.
TAPPER_REPORT_API_PORT
The port on which the more sophisticated Remote Reports API is available. It's running on the same host as TAPPER_REPORT_SERVER.
TAPPER_TS_RUNTIME
Maximum runtime after which the testprogram will not be restarted when it runs in a loop. (This is a more passive variant than a timeout.)
TAPPER_GUEST_NUMBER
Virtualisation guests are ordered; this is the guest number, or 0 if not a guest.
TAPPER_NTP_SERVER
The server from which to request NTP time.

These variables should be used in the TAP of the suites as Tapper headers. An important use-case is "report groups"; see section Report Groups.

Person in charge: Frank Becker



5 Preconditions

The central thing needed before a test can run is a so-called precondition. Creating preconditions is the main task when using the automation framework.

Most preconditions describe packages that need to be installed. Other preconditions describe how subdirectories should be copied or scripts executed.

A precondition can depend on other preconditions, leading to a tree of preconditions that will be installed from the leaves to the top.

5.1 SYNOPSIS

5.2 Precondition repository

5.2.1 Normal preconditions

We store preconditions in the database and assign testruns to them (also in the database).

Usually a precondition is developed in a (temporary) file and then entered into the database with a tool; after that the temporary file can be deleted.

Preconditions can be kept in files to re-use them when creating testruns, but that's only needed for creation, not for archiving.

5.2.2 Macro preconditions

There is another mechanism on top of normal preconditions: macro preconditions. These allow you to bundle several preconditions into a common use-case with placeholders; see section Macro Preconditions.

These macro preconditions should be archived, as they are only template files which are rendered into final preconditions. Only the final preconditions are stored in the database.

Macro preconditions can be stored in

     /data/bancroft/tapper/live/repository/macropreconditions/

5.2.3 Precondition types

Some precondition types can contain other, simpler precondition types. To distinguish them we call them highlevel preconditions and action preconditions, respectively.

5.2.3.1 Action preconditions

The following action precondition types are allowed:

package
A package (kernel, library, etc.), of type .tar, .tar.gz or .tar.bz2
image
A complete OS image of type .iso, .tar.gz, .tgz, .tar, .tar.bz2
prc
Create a config for the PRC module of the automation layer.
copyfile
One file that can just be copied/rsync'd
installer_stop
Don't reboot machine after system installer finished
grub
Overwrite automatically generated grub config with one provided by the tester
fstab
Append a line to /etc/fstab
repository
Fetch data from a git, hg or svn repository
exec
Execute a script during installation phase
reboot
Requests a reboot test and states how often to reboot.

5.2.3.2 Highlevel preconditions

Currently only the following high level precondition type is allowed:

virt
Generic description for Xen or KVM

Highlevel preconditions both define things themselves and can contain other preconditions.

They are handled with some effort to Do The Right Thing, i.e., a root image defined in the highlevel precondition is always installed first. All other preconditions are installed in the order defined by their tree structure (depth-first).

5.2.4 Precondition description

We describe preconditions in YAML files (http://www.yaml.org/).

All preconditions have at least a key

     precondition_type: TYPE

and optionally

     name: VERBOSE DESCRIPTION
     shortname: SHORT DESCRIPTION

then the remaining keys depend on the TYPE.

5.2.4.1 installer_stop
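
installer_stop acts as a flag; a minimal instance needs no further keys (a sketch):

     precondition_type: installer_stop
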
5.2.4.3 package
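
For example (a sketch using values from the Xen use-case later in this manual; path is relative to the package repository):

     precondition_type: package
     filename: tapper-testsuite-system.tar.gz
     path: mhentsc3/
     scripts: ~
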
5.2.4.4 copyfile
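
For example (a sketch with values from the Xen use-case later in this manual; protocol can be local or nfs):

     precondition_type: copyfile
     name: /usr/share/tapper/packages/mhentsc3/001.svm
     protocol: local
     dest: /xen/images/
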
5.2.4.5 fstab
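
A sketch; the key carrying the line to append is an assumption, check it against your Tapper version:

     precondition_type: fstab
     line: "server:/export/home  /home  nfs  defaults 0 0"
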
5.2.4.6 image

Usually the root image that is unpacked to a partition (in contrast to a guest image file, which is simply placed there as a file).
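
For example, the Dom0 root image from the Xen use-case later in this manual:

     precondition_type: image
     mount: /
     image: suse/suse_sles10_64b_smp_raw.tar.gz
     partition: /dev/sda2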

5.2.4.7 repository
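
A sketch; apart from precondition_type the key names here are assumptions, check them against your Tapper version:

     precondition_type: repository
     type: git
     url: git://osrc.amd.com/tapper.git
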
5.2.4.8 prc

It is typically contained implicitly in the abstract precondition virt, but can also be defined explicitly, e.g., for kernel tests.

Creates config for PRC. This config controls what is to be run and started when the machine boots.

5.2.4.12 General precondition keys “mountfile”

The following options are possible in each precondition. With them you can execute the precondition inside guest images:

     mountfile: ...
     mountpartition: ...
     mounttype: 

TODO is this the same as mountfile, mountpartition?

  1. only mountfile: e.g. a raw image, loop-mounted
  2. only mountpartition: mount that partition
  3. both: an image file with partitions; mount the image file and from that only the given partition
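
Case 3 is what the Xen use-case later in this manual uses for its guest image:

     mountfile: /xen/images/redhat_rhel5u2_64b_smp_up_small_raw.img
     mountpartition: p1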

Person in charge: Maik Hentsche

5.3 Macro Preconditions

This section describes macro precondition files as they are stored in /data/bancroft/tapper/live/repository/macropreconditions/.

A macro precondition is Perl code.

It contains exactly one hash ref.

The hashref must have a key preconditions pointing to an arrayref of strings. Each of these strings is a precondition which is preprocessed with Template-Toolkit.

The hash can contain a key mandatory_fields pointing to an arrayref of field names that are validated when the macro precondition is evaluated.

Macro preconditions are not stored in the database. They are only a tool to ease the creation of preconditions; only the resulting preconditions are stored in the database.

Because the preconditions key contains just an array, a macro precondition can only create a linear list of preconditions, not a tree (as would be possible via pre_preconditions). Therefore you need to order them the way a tree walk would.

The values for the placeholders can be filled via

     tapper-testrun new [all usual options] \
         --macroprecond=FILENAME \
          -DPLACEHOLDER1=VALUE1 \
          -DPLACEHOLDER2=VALUE2 \
          -DPLACEHOLDER3=VALUE3

The FILENAME is a complete filename with absolute path.

The format of a macro precondition is basically just a Perl hashref where the preconditions are just an array reference of strings:

     
      {
       preconditions => [
                         'macro content 1',
                         'macro content 2',
                         'macro content 3',
                        ],
       mandatory_fields => [ $placeholders ],
      }
     

which will get eval'ed in Perl.

You can quote the strings with Perl quote operators.

The string content of the preconditions can be any string with placeholders in Template-Toolkit syntax. Here is the same example, more verbose, with the two placeholders "image_file" and "xen_package" in it:

     
      {
       preconditions => [
       '
     precondition: foobar1
     name: A nice description
     dom0:
       root:
         precondition_type: image
         mount: /
         image: [% image_file %]
         partition: /dev/sda2
       preconditions:
         - precondition_type: package
           filename: [% xen_package %]
           path: tapperutils/
           scripts: ~
       ',
       'macro content 2',
       'macro content 3',
      ],
       mandatory_fields => [ qw(image_file xen_package) ],
      }
     

The appropriate testrun creation looks like this:

     tapper-testrun new ... \
         --macroprecond=FILENAME \
          -Dimage_file=suse/suse_sles10_64b_smp_raw.tar.gz \
          -Dxen_package=xen-3.2_20080116_1546_16718_f4a57e0474af__64bit.tar.gz

5.3.1 A real-life example: kernel boot test

Person in charge: Steffen Schwigon



6 Web User Interface

The Web User Interface is a frontend to the Reports database. It provides an overview of reports that came in from several machines and test suites.

It can filter the results by date, machine or test suite, gives a colorful (RED/YELLOW/GREEN) overview of success/failure ratios, and allows zooming into the details of single reports.

To evaluate reported test results in a more programmatic way, have a look into the DPath Query Language that is part of the Reports::API.

6.1 Usage

The main URL is

     http://osrc.amd.com/tapper

TODO



7 Reports::API

7.1 Overview

Yet another daemon, the so-called Tapper::Reports::API, runs on the same host as the TAP receiver. This ‘Reports API’ is meant for everything that needs more than just dropping TAP reports to a port, e.g., some interactive dialog or parameters.

Tapper::Reports::API listens on port 7358. Its API is modeled after the classic Unix script look&feel, with a first line describing how to interpret the rest of the lines.

The first line consists of a shebang (#!), an API command and command parameters. The rest of the file is the payload for the API command.

The syntax of the ‘command params’ varies depending on the ‘api command’ to make each command intuitively usable. Sometimes they are just positional parameters; sometimes they look like the start of a HERE document (i.e., they are prefixed with << as you can see below).

Person in charge: Steffen Schwigon

7.2 Raw API Commands

In this section the raw API is described. That's the way you can use it without any dependencies, except for the minimal ability to talk to a port, e.g., via netcat.

See section Client Utility tapper-api for a dedicated command line utility that makes talking to the Reports API easier, though it is a dependency that might not be available in your personal test environment.

7.2.1 upload - attach a file to a report

This API command lets you upload files, i.e. attachments, to reports. These files are available later through the web interface. Use this to attach log files, config files or console output.

7.2.1.1 Synopsis
     #! upload REPORTID FILENAME [ CONTENTTYPE ]
     payload
7.2.1.2 Parameters
7.2.1.3 Payload

The raw content of the file to upload.

7.2.1.4 Example usage

Just echo the first api-command line and then immediately cat the file content:

     $ ( echo "#! upload 552 xyz.tmp" ; cat xyz.tmp ) | netcat -w1 bascha 7358
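
The optional CONTENTTYPE works the same way, e.g. for a binary file (a variation of the example above; the file name is hypothetical):

     $ ( echo "#! upload 552 shot.png application/octet-stream" ; cat shot.png ) | netcat -w1 bascha 7358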

7.2.2 mason - Render templates with embedded query language

To query report results you can send templates to the API, inside which a query language is available to get report details. The API command is named after the template engine, so that other template engines can be provided as well.

7.2.2.1 Synopsis
     #! mason debug=0 <<ENDMARKER
     payload
     ENDMARKER
7.2.2.2 Parameters
7.2.2.3 Payload

A mason template.

Mason is a template language, see http://masonhq.com. Inside the template we provide a function reportdata to access report data via a query language. See section Reports Query Language for details about this.

7.2.2.4 Example usage

This is a raw Mason template:

     % my $world = "Mason World";
     Hello <% $world %>!
     % my @res = reportdata '{ "suite.name" => "perfmon" } :: //tap/tests_planned';
     Planned perfmon tests:
     % foreach (@res) {
        <% $_ %>
     % }

If you want to submit such a Mason template you can add the api-command line and the EOF marker like this:

     $ EOFMARKER="MASONTEMPLATE".$$
     $ payload_file="perfmon_tests_planned.mas"
     $ ( echo "#! mason <<$EOFMARKER" ; cat $payload_file ; echo "$EOFMARKER" ) \
         | netcat -w1 bascha 7358

The output of this is the rendered template. You can extend the line to save the rendered result into a file:

     $ ( echo "#! mason <<$EOFMARKER" ; cat $payload_file ; echo "$EOFMARKER" ) \
         | netcat -w1 bascha 7358 > result.txt

The answer looks like this:

     Hello Mason World!
     Planned perfmon tests:
        3
        4
        17

7.3 Query language DPath

The query language is the argument to reportdata, as used embedded in the ‘mason’ examples above:

      reportdata '{ "suite.name" => "perfmon" } :: //tap/tests_planned'

It consists of 2 parts, divided by the ‘::’.

We call the first part (in braces) the reports filter and the second part the data filter.

7.3.1 Reports Filter (SQL::Abstract)

The reports filter selects which reports to look at. The expression inside the braces is actually a complete SQL::Abstract expression (http://search.cpan.org/~mstrout/SQL-Abstract/) working internally as a select in the context of the object relational mapper, which targets the table Report with an active JOIN to the table Suite.

All the matching reports are then taken to build a data structure for each one, consisting of the table data and the parsed TAP part which is turned into a data structure via TAP::DOM (http://search.cpan.org/~schwigon/TAP-DOM/).

The data filter then works on that data structure for each report.

7.3.1.1 SQL::Abstract expressions

The filter expressions are best described by example:
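
For instance (a sketch: suite.name is taken from the examples above, the machine_name column is an assumption; per SQL::Abstract, hash keys combine with AND and arrayrefs with OR):

      { "suite.name" => "perfmon" }
      { "suite.name" => [ "perfmon", "kernbench" ] }
      { "suite.name" => "perfmon", "machine_name" => "bullock" }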

7.3.1.2 The data structure
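
Roughly, the structure per report combines the table data with the TAP part parsed by TAP::DOM; a simplified sketch (the key names are abbreviations, not the exact set):

      {
        id           => 552,
        machine_name => "bullock",
        suite        => { name => "perfmon" },
        tap          => { tests_planned => 3, tests_run => 3 },
      }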

7.3.2 Data Filter (Data::DPath)

The data structure that is created for each report can be evaluated using the data filter part of the query language, i.e., everything after the ::. This part is passed through to Data::DPath (http://search.cpan.org/~schwigon/Data-DPath/).

7.3.2.1 Data::DPath expressions
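
For instance (the first path appears in the mason example above; the second is a sketch assuming the TAP::DOM structure provides a lines array with one entry per TAP line):

      //tap/tests_planned
      //tap/lines/*/description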

7.4 Client Utility tapper-api

There is a command line utility tapper-api that helps with using the API without the need to speak the protocol or fiddle with netcat yourself.

7.4.1 help

You can acquire a help page for each sub command:

     $ /home/tapper/perl510/bin/tapper-api help upload

prints

     tapper-api upload --reportid=s --file=s [ --contenttype=s ]
        --verbose          some more informational output
        --reportid         INT; the testrun id to change
        --file             STRING; the file to upload, use '-' for STDIN
        --contenttype      STRING; content-type, default 'plain',
                           use 'application/octed-stream' for binaries

7.4.2 upload

Use it from the Tapper path, like:

     $ /home/tapper/perl510/bin/tapper-api upload \
       --file /var/log/messages \
       --reportid=301

You can also use the special filename - to read from STDIN, e.g., if you need to pipe the output of tools like dmesg:

     $ dmesg | /home/tapper/perl510/bin/tapper-api upload \
       --file=- \
       --filename dmesg \
       --reportid=301

7.4.3 mason

TODO



8 Complete Use-Cases

In this chapter we describe how the single features are put together into whole use-cases.

8.1 Automatic Xen testing

This is a description of how to run Xen tests with Tapper, using SLES10 with one RHEL5.2 guest (64 bit) as an example.

The following mainly applies to manually assigning Xen tests. The SysInt team uses temare to automatically create the steps described here.

8.1.1 Paths

8.1.2 Choose an image for Dom0 and images for each guest

We use suse/suse_sles10_64b_smp_raw.tar.gz as Dom0 and


 osko:/export/image_files/official_testing/raw_img/redhat_rhel5u2_64b_smp_up_small_raw.img

as the only guest.

The SuSE image is of precondition type image. Thus its path is relative to /mnt/images which has bancroft:/data/bancroft/tapper/live/repository/images/ mounted.

The root partition is named in the section ‘root’ of the Xen precondition. Furthermore, you need to define the destination partition to be Dom0 root. We use /dev/sda2 as an example. The partition could also be named using its UUID or partition label. Thus you need to add the following to the dom0 part of the Xen precondition:


  root: 
    precondition_type: image
    mount: / 
    image: suse/suse_sles10_64b_smp_raw.tar.gz 
    partition: /dev/sda2

The RedHat image is of type ‘copyfile’.

It is copied from osko:/export/image_files/official_testing/raw_img/, which is mounted to /mnt/nfs beforehand.

This mounting is done automatically because the protocol type nfs is given. The image file is copied to the destination named as dest in the ‘copyfile’ precondition. We use /xen/images/ as an example.

To allow the System Installer to install preconditions into the guest image, the file to mount and the partition to mount need to be named. Note that even though in some cases the mountfile can be determined automatically, in other cases this is not possible (e.g. when you get it from a tar.gz package). The resulting root section for this guest is:

 
   root:
     precondition_type: copyfile
     name: osko:/export/image_files/official_testing/raw_img/redhat_rhel5u2_64b_smp_up_small_raw.img
     protocol: nfs
     dest: /xen/images/
     mountfile: /xen/images/redhat_rhel5u2_64b_smp_up_small_raw.img
     mountpartition: p1

8.1.3 PRC configuration

PRC (Program Run Control) is responsible for starting guests and test suites.

8.1.3.1 Guest Start Configuration

Making PRC able to start Xen guests is very simple. Every guest entry needs a section named "config". In this section, a precondition describing how the config file is installed and a filename have to be given. As with guest images, the file name is needed because it can't be determined automatically in some cases. We use 001.svm, installed via copyfile to /xen/images/001.svm. The resulting config section is:

 
     config:
       precondition_type: copyfile
       name: /usr/share/tapper/packages/mhentsc3/001.svm
       protocol: local
       dest: /xen/images/
       filename: /xen/images/001.svm

8.1.3.2 Testsuite Configuration

You need to define where you want which test suite to run. This can be done in every guest and in Dom0. In this example, Dom0 and the single guest will run different testsuites. This chapter only describes the Dom0 test program; see the summary at the end for details on the guest test program.

The section testprogram consists of a precondition definition describing how the test suite is installed. In our example we use a precondition of type package with a relative path name. This path is relative to /data/bancroft/tapper/live/repository/packages/. Since bancroft:/data/bancroft/ is mounted to /data/bancroft/ in the install system, this directory can be accessed at bancroft:/data/bancroft/tapper/live/repository/packages/.

Beside the precondition you need to define an execname, which is the full path of the file to be executed (remember, it can't be determined automatically). This file is called in the root directory (/) of the test system; in case you need relative paths inside your test suite, they need to be relative to this. The program may take parameters, which are named in the optional array parameters and passed as is. Another parameter is timeout_after_testprogram, which allows you to define that your test suite shall be killed (and an error reported) after that many seconds. Even though this parameter is optional, leaving it out will result in Tapper waiting forever if your test doesn't send finish messages. The resulting testprogram section looks like this:

 
   testprogram:
     precondition_type: package
     filename: tapper-testsuite-system.tar.gz
     path: mhentsc3/
     timeout_after_testprogram: ~
     execname: /opt/system/bin/tapper_testsuite_system.sh 
     parameters: 
       - --report

8.1.4 Preconditions

Usually your images will not have all software needed for your tests installed. (In fact the example images now do, but for the purpose of better explanation we assume that we need to install dhcp, python-xml and bridge-utils in Dom0.) Furthermore we need a script to enable network and console. At last we install the Xen package and a Xen installer package; these two are still needed on our test images.

Package preconditions may have a scripts array attached that names a number of programs to be executed after the package was installed. This is used in our example to call the Xen installer script after the Xen package and the Xen installer package were installed. See the summary at the end for the resulting precondition section.

The guest image only needs a DHCP client. Since this precondition is appended to the precondition list of the appropriate guest entry, the System Installer will automatically know that the guest image has to be mounted and the precondition installed inside, relative to this mount.

8.1.5 Resulting YAML config

After all this information is gathered, put the following YAML text into a file. We use /tmp/xen.yml as an example.

 
   precondition_type: xen
   name: SLES 10 Xen with RHEL5.2 guest (64 bit)
   dom0:
     root:
       precondition_type: image
       mount: /
       image: suse/suse_sles10_64b_smp_raw.tar.gz
       partition: /dev/sda2
     testprogram:
       precondition_type: package
       filename: tapper-testsuite-system.tar.gz
       path: mhentsc3/
       timeout_after_testprogram: 3600
       execname: /home/tapper/x86_64/bin/tapper_testsuite_ctcs.sh
       parameters: 
         - --report
     preconditions:
       - precondition_type: package
         filename: dhcp-3.0.3-23.33.x86_64.rpm
         path: mhentsc3/sles10/
       - precondition_type: package
         filename: dhcp-client-3.0.3-23.33.x86_64.rpm
         path: mhentsc3/sles10/
       - precondition_type: package
         filename: python-xml-2.4.2-18.7.x86_64.rpm
         path: mhentsc3/sles10/
       - precondition_type: package
         filename: bridge-utils-1.0.6-14.3.1.x86_64.rpm
         path: mhentsc3/sles10/
   # has to come BEFORE xen because config done in here is needed for xens initrd
       - precondition_type: package
         filename: network_enable_sles10.tar.gz
         path: mhentsc3/sles10/
         scripts:
           - /bin/network_enable_sles10.sh
       - precondition_type: package
         filename: xen-3.2_20080116_1546_16718_f4a57e0474af__64bit.tar.gz
         path: mhentsc3/
         scripts: ~
       - precondition_type: package
         filename: xen_installer_suse.tar.gz
         path: mhentsc3/sles10/
         scripts:
           - /bin/xen_installer_suse.pl
   # only needed for debug purpose
       - precondition_type: package
         filename: console_enable.tar.gz
         path: mhentsc3/
         scripts:
           - /bin/console_enable.sh
   guests:
     - root:
         precondition_type: copyfile
         name: osko:/export/image_files/official_testing/raw_img/redhat_rhel5u2_64b_smp_up_small_raw.img
         protocol: nfs
         dest: /xen/images/
         mountfile: /xen/images/redhat_rhel5u2_64b_smp_up_small_raw.img
         mountpartition: p1
         #       mountpartition: /dev/sda3 # or label or uuid
       config:
         precondition_type: copyfile
         name: /usr/share/tapper/packages/mhentsc3/001.svm
         protocol: local
         dest: /xen/images/
         filename: /xen/images/001.svm
       testprogram:
         precondition_type: copyfile
         name: /usr/share/tapper/packages/mhentsc3/testscript.pl
         protocol: local
         dest: /bin/
         timeout_after_testprogram: 100
         execname: /bin/testscript.pl
       preconditions:
         - precondition_type: package
           filename: dhclient-4.0.0-6.fc9.x86_64.rpm
           path: mhentsc3/fedora9/

8.1.6 Grub

For Xen to run correctly, the default grub configuration is not sufficient. You need to add another precondition to your test. The System Installer will replace $root with the /dev/ notation of the root partition and $grubroot with the grub notation (including parentheses) of the root partition. Put the resulting precondition into a file. We use /tmp/grub.yml as an example. This file may read like this:

 
  precondition_type: grub
  config: |
   serial --unit=0 --speed=115200
   terminal serial
   timeout 3
   default 0
   title XEN-test
     root $grubroot
     kernel /boot/xen.gz com1=115200,8n1 console=com1
     module /boot/vmlinuz-2.6.18.8-xen root=$root showopts console=ttyS0,115200
     module /boot/initrd-2.6.18.8-xen 

8.1.7 Order Testrun

To order your test run with the previously defined preconditions you need to put them into the database. Fortunately there are command line tools to help you with this job. They can be found at /home/tapper/perl510/bin/. The production server for Tapper is bancroft.amd.com. Log in to this server (as root, since user login hasn't been thoroughly tested). Make sure that $TAPPER_LIVE is set to 1 and /home/tapper/perl510/bin/ is at the beginning of your $PATH (so the correct perl will always be found). For each precondition you want to put into the database you need to define a short name. Call /home/tapper/perl510/bin/tapper-testrun newprecondition with the appropriate options, e.g. in our example:

 
  /home/tapper/perl510/bin/tapper-testrun newprecondition --shortname=grub --condition_file=/tmp/grub.yml
  /home/tapper/perl510/bin/tapper-testrun newprecondition --shortname=xen --condition_file=/tmp/xen.yml

tapper-testrun will return a precondition ID in each case. You will need those soon, so please keep them in mind. In the example the precondition ID for grub is 4 and for Xen it's 5.

You can now put your test run into the database using /home/tapper/perl510/bin/tapper-testrun new. This expects a hostname, a test program and all preconditions. The test program is never evaluated and is only there for historical reasons; put in anything you like. root is not yet known to the database as a valid user, thus you need to add --owner with an appropriate user. The resulting call looks like this:


  /home/tapper/perl510/bin/tapper-testrun new \
     --hostname=bullock --precondition=4 --precondition=5 \
     --test_program=whatever --owner=mhentsc3
 

tapper-testrun new has more optional arguments, one of them being --earliest. This option defines when to start the test at the earliest; it defaults to "now". When the requested time has arrived, Tapper will set up the system you requested and execute your test run. Stay tuned; when everything went well, you'll see test output soon. For more information on what is going on with Tapper, see /var/log/tapper-debug.

Person in charge: Maik Hentsche



9 Tapper Development

This chapter is dedicated not to end users but to Tapper development.

9.1 Repositories

Tapper is developed using git. There is one central repository for participating in development:

 ssh://gituser@wotan/srv/gitroot/Tapper

and one mirrored public one:

 git://osrc.amd.com/tapper.git

9.2 Starting/Stopping Tapper server applications

This chapter assumes all services are deployed, as described in Deployment.

9.2.1 Live environment

The live environment is based on the host bancroft for all the server applications, like mysql db, Reports::Receiver, Reports::API, Web User Interface, MCP.

9.2.1.1 Web User Interface

The application is configured inside the Apache config and therefore only needs Apache to be (re)started. /home/tapper must be mounted.

 $ ssh root@bancroft
 $ rcapache2 restart
9.2.1.2 Reports::Receiver
 $ ssh root@bancroft
 $ /etc/init.d/tapper_reports_receiver_daemon restart
9.2.1.3 Reports::API
 $ ssh root@bancroft
 $ /etc/init.d/tapper_reports_api_daemon restart

9.2.2 Development environment

The development environment is somewhat distributed.

On host bascha there are mysql db, Reports::Receiver, Reports::API, Web User Interface.

The MCP is usually running on host siegfried, with a test target machine bullock.

9.2.2.1 Web User Interface

The application is running with its own webserver on bascha:

 $ ssh ss5@bascha

 # kill running process
 $ kill `ps auxwww|grep tapper_reports_web_server | grep -v grep | awk '{print $2}' | sort | head -1`

 # restart
 $ sudo /etc/init.d/tapper_reports_web
9.2.2.2 Reports::Receiver
 $ ssh ss5@bascha
 $ sudo /etc/init.d/tapper_reports_receiver_daemon restart
9.2.2.3 Reports::API
 $ ssh ss5@bascha
 $ sudo /etc/init.d/tapper_reports_api_daemon restart

9.2.3 Logfiles

The applications write logfiles to these places:

9.3 Deployment

This chapter is a collection of instructions on how to build the Tapper toolchain.

The whole deployment process should be supported by a common build system; however, that is not yet completed, so it is done via several self-written build steps.

9.3.1 Create and upload Python packages

This is usually done by a developer on some working state that is worth installing in the development or live environment.

9.3.2 Create and upload Perl packages

9.3.3 Generate complete Tapper toolchain in opt-tapper package

The following are the steps to create an opt-tapper.tar.gz package in a mounted and chrooted image. It compiles Perl and Python, installs them under /opt/tapper and installs the Tapper libraries. For the Perl part it also installs all CPAN dependencies from a local mirror.

The resulting /opt/tapper subdir can be used to continuously upgrade the Tapper libs as described in the following sections.

9.3.4 Update Python wrappers in opt-tapper package

This is usually done on an official Tapper release by the release manager.

Person in charge: Maik Hentsche, Conny Seidel, Steffen Schwigon

9.4 Installation of the Web User Interface

The web application itself is available via the NFS-mounted /home/tapper. On the application server you only need to configure a Location for the path /tapper in the Apache webserver.

     bancroft$ cat /etc/apache2/conf.d/tapper_reports_web.conf
     
     Alias / /home/tapper/perl510/bin/tapper_reports_web_fastcgi_live.pl/
     <LocationMatch /tapper[.]*>
            Options ExecCGI
            Order allow,deny
            Allow from all
            AddHandler fcgid-script .pl
     </LocationMatch>

Additionally there is a reverse proxy configured on osrc.amd.com that points to the application server:

     osrc$ cat /etc/apache2/conf.d/tapper_reverse_proxy.conf
     
     ProxyRequests Off
     <Proxy *>
         Order deny,allow
         Allow from all
     </Proxy>
     ProxyPass        /tapper http://bancroft/tapper
     ProxyPassReverse /tapper http://bancroft/tapper
     ProxyPass        /hardwaredb http://bancroft/hardwaredb
     ProxyPassReverse /hardwaredb http://bancroft/hardwaredb
     

Person in charge: Steffen Schwigon

9.5 Upgrading a database schema

The database schema is maintained as a description for the object relational mapper DBIx::Class, using its versioning and upgrading features.

Those features are accessible via the command line tool tapper-db-deploy. The basic principle is:

(We show it here for the ReportsDB schema. The same applies to TestrunDB.)

9.6 Environment variables

There are some environment variables used in several contexts: some of them are set by the automation layer to support the testsuites, some are used to discriminate between development and live context, and some are just auxiliary variables to switch features.

Keep in mind that the variable needs to be visible where the actual component is running, which is sometimes not obvious in the client/server infrastructure.