NAME
Test::Chunks - Chunky Data Driven Testing Support
SYNOPSIS
use Test::Chunks;
use Pod::Simple;
delimiters qw(=== +++);
plan tests => 1 * chunks;
for my $chunk (chunks) {
# Note that this code is conceptual only. Pod::Simple is not so
# simple as to provide a simple pod_to_html function.
is(
Pod::Simple::pod_to_html($chunk->pod),
$chunk->html,
$chunk->description,
);
}
__END__
=== Header 1 Test
+++ pod
=head1 The Main Event
+++ html
<h1>The Main Event</h1>
=== List Test
+++ pod
=over
=item * one
=item * two
=back
+++ html
<ul>
<li>one</li>
<li>two</li>
</ul>
DESCRIPTION
There are many testing situations where you have a set of inputs and a
set of expected outputs and you want to make sure your process turns
each input chunk into the corresponding output chunk. Test::Chunks
allows you to do this with a minimal amount of code.
Test::Chunks is optimized for input and output chunks that span multiple
lines of text.
EXPORTED FUNCTIONS
Test::Chunks extends Test::More and exports all of its functions. So you
can basically write your tests the same as Test::More. Test::Chunks
exports a few more functions though:
chunks( [data-section-name] )
The most important function is "chunks". In list context it returns a
list of "Test::Chunks::Chunk" objects that are generated from the test
specification in the "DATA" section of your test file. In scalar context
it returns the number of objects. This is useful to calculate your
Test::More plan.
Each Test::Chunks::Chunk object has methods that correspond to the names
of that object's data sections. There is also a "description" method for
accessing the description text of the object.
"chunks" can take an optional single argument, that indicates to only
return the chunks that contain a particular named data section.
Otherwise "chunks" returns all chunks.
my @all_of_my_chunks = chunks;
my @just_the_foo_chunks = chunks('foo');
run(&subroutine)
There are many ways to write your tests. You can reference each chunk
individually or you can loop over all the chunks and perform a common
operation. The "run" function does the looping for you, so all you need
to do is pass it a code block to execute for each chunk.
The "run" function takes a subroutine as an argument, and calls the sub
one time for each chunk in the specification. It passes the current
chunk object to the subroutine.
run {
my $chunk = shift;
is(process($chunk->foo), $chunk->bar, $chunk->description);
};
run_is(data_name1, data_name2)
Many times you simply want to see if two data sections are equivalent in
every chunk, probably after having been run through one or more filters.
With the "run_is" function, you can just pass the names of any two data
sections that exist in every chunk, and it will loop over every chunk
comparing the two sections.
run_is 'foo', 'bar';
NOTE: Test::Chunks will silently ignore any chunks that don't contain
both sections.
run_is_deeply(data_name1, data_name2)
Like "run_is" but uses "is_deeply" for complex data structure
comparison.
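For example, assuming each chunk has "have" and "want" sections that
earlier filters (such as "eval" or "yaml", described under FILTERS) have
already turned into Perl data structures, one line compares them all:
run_is_deeply 'have', 'want';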
run_like(data_name, regexp | data_name)
The "run_like" function is similar to "run_is" except the second
argument is a regular expression. The regexp can either be a "qr{}"
object or a data section that has been filtered into a regular
expression.
run_like 'foo', qr{<html.*};
filters(@filters_list)
Specify a list of additional filters to be applied to all chunks. See
the FILTERS section below.
You can also pass a hash ref that maps data section names to the list
of filters for that section:
filters {
xxx => [qw(chomp lines)],
yyy => ['yaml'],
zzz => 'eval',
};
If a filters list has only one element, the array ref is optional.
default_object()
Returns the default Test::Chunks object. This is useful if you feel the
need to do an OO operation in otherwise functional test code. See OO
below.
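For instance, here is a minimal sketch (the delimiters are arbitrary)
that configures the default object with the OO methods described under
OO below and then falls back to the usual functional calls:
my $default = default_object;
$default->delimiters(qw(*** +++));
plan tests => 1 * $default->chunks;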
WWW() XXX() YYY() ZZZ()
These debugging functions are exported from the Spiffy.pm module. See
Spiffy for more info.
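For example (relying on Spiffy's documented behavior of dumping its
arguments as YAML; the foo and bar sections here are made up), you might
peek at a chunk object while debugging:
run {
my $chunk = shift;
WWW($chunk); # warn a YAML dump of the chunk, then keep going
is(process($chunk->foo), $chunk->bar, $chunk->description);
};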
TEST SPECIFICATION
Test::Chunks allows you to specify your test data in an external file,
in the DATA section of your program, or in a scalar variable containing
all the text input.
A *test specification* is a series of text lines. Each test (or chunk)
begins with a line containing the chunk delimiter and an optional
"description". Each chunk is further subdivided into named sections with
a line containing the data delimiter and the data section name.
Here is an example:
use Test::Chunks;
delimiters qw(### :::);
# test code here
__END__
### Test One
::: foo
a foo line
another foo line
::: bar
a bar line
another bar line
### Test Two
::: foo
some foo line
some other foo line
::: bar
some bar line
some other bar line
::: baz
some baz line
some other baz line
This example specifies two chunks. They both have foo and bar data
sections. The second chunk also has a baz section. The chunk delimiter is
"###" and the data delimiter is ":::".
The default chunk delimiter is "===" and the default data delimiter is
"---".
There are two special data section names.
--- SKIP
--- ONLY
A chunk with a SKIP section causes that test to be ignored. This is
useful for temporarily disabling a test.
A chunk with an ONLY section causes only that chunk to be returned. This
is useful when you are concentrating on getting a single test to pass.
If there is more than one chunk with ONLY, the first one will be chosen.
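For example, in the sketch below (the section names are arbitrary), the
first chunk is skipped while the second still runs; putting "--- ONLY"
in a chunk would instead make it the only chunk returned:
=== Broken test, disabled for now
--- SKIP
--- input
some input
--- expected
some output
=== Working test
--- input
more input
--- expected
more output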
FILTERS
The real power in writing tests with Test::Chunks comes from its
filtering capabilities. Test::Chunks comes with an ever-growing set of
useful generic filters that you can sequence and apply to various test
chunks. That means you can specify the chunk serialization in the most
readable format you can find, and let the filters translate it into what
you really need for a test. It is easy to write your own filters as
well.
Test::Chunks allows you to specify a list of filters. The default
filters are "norm" and "trim". These filters will be applied (in order)
to the data after it has been parsed from the specification and before
it is set into its Test::Chunks::Chunk object.
You can add to the default filter list with the "filters" function.
You can specify additional filters to a specific chunk by listing them
after the section name on a data section delimiter line.
Example:
use Test::Chunks;
filters qw(foo bar);
filters { perl => 'strict' };
sub upper { uc(shift) }
__END__
=== Test one
--- foo trim chomp upper
...
--- bar -norm
...
--- perl eval dumper
my @foo = map {
- $_;
} 1..10;
\ @foo;
Putting a "-" before a filter on a delimiter line disables that filter.
Scalar vs List
Each filter can take either a scalar or a list as input, and will return
either a scalar or a list. Since filters are chained together, it is
important to learn which filters expect which kind of input and return
which kind of output.
For example, consider the following filter list:
norm trim lines chomp array dumper eval
The data always starts out as a single scalar string. "norm" takes a
scalar and returns a scalar. "trim" takes a list and returns a list, but
a scalar is a valid list. "lines" takes a scalar and returns a list.
"chomp" takes a list and returns a list. "array" takes a list and
returns a scalar (an anonymous array reference containing the list
elements). "dumper" takes a list and returns a scalar. "eval" takes a
scalar and creates a list.
A list of exactly one element works fine as input to a filter requiring
a scalar, but any other list will cause an exception. A scalar in list
context is considered a list of one element.
Data accessor methods for chunks will return a list of values when used
in list context, and the first element of the list in scalar context.
This usually does the right thing, but be aware.
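Here is a minimal sketch of the context behavior (the "input" section
name and its filters are chosen only for illustration):
use Test::Chunks;
plan tests => 2 * chunks;
run {
my $chunk = shift;
my @lines = $chunk->input; # list context: all of the filtered values
my $first = $chunk->input; # scalar context: just the first value
is(scalar(@lines), 3, 'lines/chomp produced three values');
is($first, 'one', 'scalar context returns the first value');
};
__END__
=== Three lines
--- input lines chomp
one
two
three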
norm
scalar => scalar
Normalize the data. Change non-Unix line endings to Unix line endings.
chomp
list => list
Remove the final newline from each string value in a list.
trim
list => list
Remove extra blank lines from the beginning and end of the data. This
allows you to visually separate your test data with blank lines.
lines
scalar => list
Break the data into a list of lines. Each line (except possibly the
last one if the "chomp" filter came first) will have a newline at the
end.
array
list => scalar
Turn a list of values into an anonymous array reference.
join
list => scalar
Join a list of strings into a scalar.
eval
scalar => list
Run Perl's "eval" command against the data and use the returned value as
the data.
regexp[=xism]
scalar => scalar
The "regexp" filter will turn your data section into a regular
expression object. You can pass in extra flags after an equals sign.
If the text contains more than one line and no flags are specified, then
the 'xism' flags are assumed.
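A hedged sketch of using a "regexp"-filtered section together with
"run_like" (the section names are made up):
run_like 'text', 'match';
__END__
=== Case-insensitive match
--- text
Hello World
--- match regexp=i
world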
get_url
scalar => scalar
The text is chomped and considered to be a URL. Then LWP::Simple::get is
used to fetch the contents of that URL.
yaml
scalar => list
Apply the YAML::Load function to the data chunk and use the resultant
structure. Requires YAML.pm.
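For instance, a hedged sketch (section names made up; YAML.pm must be
installed) that compares a YAML-loaded section against an eval'd Perl
structure:
run_is_deeply 'loaded', 'expected';
__END__
=== Load a mapping
--- loaded yaml
foo: 1
bar: baz
--- expected eval
+{ foo => 1, bar => 'baz' }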
dumper
list => scalar
Take a data structure (presumably from another filter like eval) and use
Data::Dumper to dump it in a canonical fashion.
strict
scalar => scalar
Prepend the string:
use strict;
use warnings;
to the chunk's text.
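A hedged sketch (section names made up) that chains "strict" with "eval"
so the section compiles under strictures before it runs:
run_is_deeply 'perl', 'want';
__END__
=== Strict snippet
--- perl strict eval
my $x = 5;
$x * 2;
--- want eval
10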
base64
scalar => scalar
Decode base64 data. Useful for binary tests.
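A small sketch, assuming the filter decodes with the usual MIME::Base64
semantics and that the section names below are made up:
run_is 'decoded', 'plain';
__END__
=== Round-trip some encoded text
--- decoded base64
aGVsbG8K
--- plain
hello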
escape
scalar => scalar
Unescape all backslash escaped chars.
Rolling Your Own Filters
Creating filter extensions is very simple. You can either write a
*function* in the "main" namespace, or a *method* in the
"Test::Chunks::Filter" namespace. In either case the text and any extra
arguments are passed in and you return whatever you want the new value
to be.
Here is a self-explanatory example:
use Test::Chunks;
filters 'foo', 'bar=xyz';
sub foo {
transform(shift);
}
sub Test::Chunks::Filter::bar {
my $class = shift;
my $data = shift;
my $args = shift;
# transform $data in a barish manner
return $data;
}
Normally you'll probably just use the functional interface, although all
the builtin filters are methods.
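To illustrate end to end, here is a small, self-contained sketch (the
filter names, section names, and data are all made up) that defines one
function filter and one method filter and applies them from the
delimiter lines:
use Test::Chunks;
plan tests => 1 * chunks;
# A function filter in the "main" namespace.
sub upper_case { uc(shift) }
# A method filter in the Test::Chunks::Filter namespace.
sub Test::Chunks::Filter::reverse_lines {
my $class = shift;
join '', reverse split /^/m, shift;
}
run_is 'input', 'expected';
__END__
=== Upper case
--- input upper_case
hello
--- expected
HELLO
=== Reverse the lines
--- input reverse_lines
one
two
--- expected
two
one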
OO
Test::Chunks has a nice functional interface for simple usage. Under the
hood everything is object oriented. A default Test::Chunks object is
created and all the functions are really just method calls on it.
This means if you need to get fancy, you can use all the object oriented
stuff too. Just create new Test::Chunks objects and use the functions as
methods.
use Test::Chunks;
my $chunks1 = Test::Chunks->new;
my $chunks2 = Test::Chunks->new;
$chunks1->delimiters(qw(!!! @@@))->spec_file('test1.txt');
$chunks2->delimiters(qw(### $$$))->spec_string($test_data);
plan tests => $chunks1->chunks + $chunks2->chunks;
# ... etc
SUBCLASSING
One of the nicest things about Test::Chunks is that it is easy to
subclass. This is very important, because in your personal project, you
will likely want to extend Test::Chunks with your own filters and other
reusable pieces of your test framework.
Here is an example of a subclass:
package MyTestStuff;
use Test::Chunks -Base;
our @EXPORT = qw(some_func);
const chunk_class => 'MyTestStuff::Chunk';
const filter_class => 'MyTestStuff::Filter';
sub some_func {
(my ($self), @_) = find_my_self(@_);
...
}
package MyTestStuff::Chunk;
use base 'Test::Chunks::Chunk';
sub desc {
my $self = shift;
$self->description(@_);
}
package MyTestStuff::Filter;
use base 'Test::Chunks::Filter';
sub upper {
my $self = shift;
$self->assert_scalar(@_);
uc(shift);
}
Note that you don't have to re-export all the functions from
Test::Chunks. That happens automatically, due to the powers of Spiffy.
OTHER COOL FEATURES
Test::Chunks automatically adds
use strict;
use warnings;
to all of your test scripts. A Spiffy feature indeed.
AUTHOR
Brian Ingerson
COPYRIGHT
Copyright (c) 2005. Brian Ingerson. All rights reserved.
This program is free software; you can redistribute it and/or modify it
under the same terms as Perl itself.
See http://www.perl.com/perl/misc/Artistic.html